Text Generation

Generate human-like text for any task. Chat, complete, summarize, translate, and more.

Quick Start

The Chat Completions API is the primary way to generate text. It takes a list of messages and returns a model-generated response.

Python
from mythicdot import MythicDot

client = MythicDot()

response = client.chat.completions.create(
    model="mythic-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the capital of France?"}
    ]
)

print(response.choices[0].message.content)
# "The capital of France is Paris."

Message Roles

Each message in the conversation has a role that determines how the model interprets it.

system
Sets the behavior and context for the assistant. Typically the first message in the conversation.
user
Messages from the user/human. These are the prompts and questions you want answered.
assistant
Previous responses from the model. Used to provide conversation context.
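The three roles combine to carry conversation history: because the API is stateless, each request includes all prior turns. A sketch of a multi-turn exchange as plain message dicts (the assistant reply shown is illustrative, not a real model output):

```python
# Multi-turn conversation history as a list of role/content dicts.
# The assistant message is a previous model reply, echoed back so the
# model can resolve references like "its" in the follow-up question.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"},
]

# Every message needs exactly these two keys.
assert all(set(m) == {"role", "content"} for m in messages)
```

Passing the full list to client.chat.completions.create on every turn is how context is maintained across a chat.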

Request Parameters

Parameter | Type | Required | Description
model | string | required | Model ID to use (e.g., "mythic-4", "mythic-4-mini")
messages | array | required | List of messages in the conversation. Each message has a role and content.
temperature | number | optional | Sampling temperature, 0-2. Higher values are more random. Default: 1
max_tokens | integer | optional | Maximum number of tokens to generate. Default varies by model.
top_p | number | optional | Nucleus sampling: only tokens within the top_p probability mass are considered. Default: 1
stream | boolean | optional | Enable streaming responses. Default: false
stop | string or array | optional | Up to 4 sequences at which the model will stop generating.
response_format | object | optional | Enable JSON mode or structured outputs.
tools | array | optional | List of functions the model can call. See Function Calling.
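The optional parameters can be collected into a request dict and splatted into the create call. A sketch with illustrative values (the specific numbers and stop sequence are choices for this example, not recommended defaults):

```python
# Request parameters assembled as a dict, ready to pass as
# client.chat.completions.create(**request).
request = {
    "model": "mythic-4-mini",
    "messages": [
        {"role": "user", "content": "Summarize the Chat Completions API in one sentence."}
    ],
    "temperature": 0,    # deterministic output for factual tasks
    "max_tokens": 100,   # cap the length of the completion
    "stop": ["\n\n"],    # stop generating at the first blank line
}

# Sanity checks matching the documented ranges.
assert 0 <= request["temperature"] <= 2
assert len(request["stop"]) <= 4  # at most 4 stop sequences
```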

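With stream set to true, the response arrives as a sequence of chunks, each carrying a small content delta. A sketch of accumulating deltas into the full text; the chunks are mocked here as plain dicts because the exact SDK chunk objects are not shown in this document, but a real stream from client.chat.completions.create(..., stream=True) would be iterated the same way:

```python
# Mocked stream chunks standing in for what the SDK would yield.
chunks = [
    {"choices": [{"delta": {"content": "The capital "}}]},
    {"choices": [{"delta": {"content": "of France "}}]},
    {"choices": [{"delta": {"content": "is Paris."}}]},
    {"choices": [{"delta": {}}]},  # final chunk: empty delta, stream ends
]

# Concatenate each delta's content as it arrives.
text = ""
for chunk in chunks:
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")

print(text)  # "The capital of France is Paris."
```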
Response Object

Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "mythic-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 8,
    "total_tokens": 33
  }
}
id: Unique identifier for the completion.
choices: Array of completion choices. Usually one element unless n > 1.
message: The generated message, with role and content.
finish_reason: "stop" (natural completion), "length" (hit max_tokens), or "tool_calls" (the model is calling a function).
usage: Token counts for billing: prompt, completion, and total.
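The response is plain data once parsed. A short sketch pulling out these fields, using the example JSON from this section verbatim:

```python
# The example response from the docs, as a Python dict.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1704067200,
    "model": "mythic-4",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The capital of France is Paris."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 8, "total_tokens": 33},
}

choice = response["choices"][0]
print(choice["message"]["content"])  # the generated text

# "length" means the reply was cut off by max_tokens.
if choice["finish_reason"] == "length":
    print("Warning: response was truncated")

# usage is what billing is based on; total is the sum of the parts.
usage = response["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```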

Available Models

mythic-4

Most capable model for complex tasks requiring advanced reasoning and creativity.

128K context · 16K max output · Vision capable

mythic-4-mini

Fast and cost-effective for most everyday tasks. Great balance of speed and quality.

128K context · 16K max output · 70% cheaper

mythic-4o

Optimized for instruction following and structured outputs. Best for agents.

128K context · JSON mode · Tool use

mythic-3.5-turbo

Legacy model. Use mythic-4-mini instead for better performance at similar cost.

16K context · 4K max output · Legacy

Best Practices

💡 Tips for Better Results

Use clear, specific instructions in your system prompt.
Provide examples (few-shot prompting) for complex formats.
Set temperature to 0 for deterministic, factual responses.
Use temperature 0.7-1.0 for creative writing.
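The few-shot tip amounts to seeding the conversation with hand-written user/assistant pairs that demonstrate the target format before the real input. A sketch (the task and example sentences are illustrative):

```python
# Few-shot prompting: show the desired output format with fabricated
# example turns, then put the real query last.
few_shot = [
    {"role": "system", "content": "Extract the city from each sentence. Reply with the city name only."},
    {"role": "user", "content": "I flew into Berlin last night."},
    {"role": "assistant", "content": "Berlin"},
    {"role": "user", "content": "The conference was held in Tokyo."},
    {"role": "assistant", "content": "Tokyo"},
    # The real input goes last; the model imitates the pattern above.
    {"role": "user", "content": "She grew up just outside of Lima."},
]

# The examples alternate user/assistant after the system message.
roles = [m["role"] for m in few_shot[1:-1]]
assert roles == ["user", "assistant"] * 2
```

Pass few_shot as the messages array, typically with temperature 0, so the model reproduces the demonstrated format rather than improvising one.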