Models that think step-by-step before responding. Ideal for math, coding, science, and complex problem-solving.
Reasoning models use extended thinking to break down complex problems before producing a final answer. This chain-of-thought approach dramatically improves accuracy on tasks requiring multi-step logic.
Most powerful reasoning model. Excels at complex multi-step problems, research, and deep analysis.
Fast reasoning model optimized for STEM. Great for math, coding, and scientific tasks.
```python
from mythicdot import MythicDot

client = MythicDot()

# Use a reasoning model for complex problems
response = client.chat.completions.create(
    model="mythic-o1",
    messages=[
        {
            "role": "user",
            "content": """Write a Python function that finds the longest
palindromic substring in a string using dynamic programming.
Explain your reasoning.""",
        }
    ],
)

print(response.choices[0].message.content)
```
- Complex proofs, calculus, statistics, and word problems
- Algorithm design, debugging, and code optimization
- Research synthesis, data interpretation, hypothesis testing
- Statistical reasoning, trend analysis, forecasting
- Strategy development, project planning, decision trees
- Multi-source synthesis, fact verification, analysis
| Parameter | Default | Description |
|---|---|---|
| reasoning_effort | medium | Thinking budget: "low", "medium", or "high". Higher values spend more reasoning tokens and generally improve accuracy. |
| max_completion_tokens | varies | Total budget for reasoning + output. Includes hidden thinking tokens. |
| include_reasoning | false | Whether to return the thinking process in the response (beta). |
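Because `max_completion_tokens` covers hidden thinking tokens as well as the visible answer, the tokens left for output shrink the longer the model reasons. A minimal sketch of that budgeting arithmetic (the helper name is illustrative, not part of the MythicDot SDK):

```python
def remaining_output_budget(max_completion_tokens: int,
                            reasoning_tokens_used: int) -> int:
    """Tokens left for the visible answer after hidden reasoning.

    max_completion_tokens is the total budget for reasoning + output,
    so a long chain of thought can leave little room for the answer.
    """
    return max(0, max_completion_tokens - reasoning_tokens_used)


# With a 50,000-token budget and 42,000 tokens spent thinking,
# only 8,000 tokens remain for the visible output.
print(remaining_output_budget(50_000, 42_000))
```

If the reasoning phase exhausts the entire budget, the response can come back truncated or empty, so set `max_completion_tokens` generously for high-effort requests.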
```python
# Low effort: quick answers, simple problems
response = client.chat.completions.create(
    model="mythic-o1-mini",
    reasoning_effort="low",
    messages=[{"role": "user", "content": "What's 15 * 23?"}],
)

# High effort: complex problems, maximum accuracy
response = client.chat.completions.create(
    model="mythic-o1",
    reasoning_effort="high",
    max_completion_tokens=50000,
    messages=[{"role": "user", "content": "Prove the Riemann hypothesis..."}],
)
```
Reasoning models may take 10-60+ seconds on complex problems, so they're a poor fit for simple Q&A, casual chat, or any task that needs low latency. Use standard models (mythic-4) for those use cases.
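One way to act on this guidance is to route requests: send latency-sensitive or simple prompts to mythic-4 and escalate multi-step problems to mythic-o1. A minimal heuristic sketch (the `pick_model` function and keyword list are illustrative assumptions, not SDK features):

```python
# Hypothetical router: pick the standard model for simple chat and the
# reasoning model for multi-step technical work. The keyword heuristic
# is a placeholder; production systems often use a classifier instead.
REASONING_HINTS = ("prove", "derive", "debug", "optimize", "step-by-step")

def pick_model(prompt: str) -> str:
    """Return a model name based on a rough prompt-complexity heuristic."""
    text = prompt.lower()
    if any(hint in text for hint in REASONING_HINTS):
        return "mythic-o1"  # slower, more accurate reasoning model
    return "mythic-4"       # low-latency standard model
```

The chosen name then goes straight into the `model` parameter of `client.chat.completions.create(...)`.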