🔢 Token Counting

Count tokens before making API calls. Estimate costs, validate inputs, and optimize your prompts for efficiency.

Token Counter Demo

The sample text below tokenizes to 23 tokens:

"Hello, world! This is a sample text to demonstrate how tokenization works with our models."

Quick Reference

  • ~4 characters per token
  • ~0.75 words per token
  • 1 page ≈ 500 tokens
  • 1 book ≈ 100K tokens
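The ratios above can be turned into a quick pre-flight estimate. This is a minimal sketch of the ~4 characters-per-token heuristic; the function name is illustrative, the result is an approximation only, and the count_tokens API should be used when an exact number matters:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token heuristic.

    An approximation only; use the count_tokens API for exact counts.
    """
    return max(1, round(len(text) / 4))

# A 36-character sentence estimates to about 9 tokens.
approx = estimate_tokens("Hello, world! This is a sample text.")
```

Actual tokenization varies with language, punctuation, and vocabulary, so treat this as a ballpark figure for budgeting, not a guarantee.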

Count Tokens with the API

Python

from mythicdot import MythicDot

client = MythicDot()

# Count tokens for a message
response = client.messages.count_tokens(
    model="mythic-4",
    messages=[
        {"role": "user", "content": "What is the meaning of life?"}
    ],
    system="You are a helpful assistant."
)
print(f"Input tokens: {response.input_tokens}")

# Estimate cost before calling
cost_per_million = 3.00  # Example pricing
estimated_cost = (response.input_tokens / 1_000_000) * cost_per_million
print(f"Estimated input cost: ${estimated_cost:.6f}")
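The cost arithmetic in the snippet above generalizes to a small standalone helper that also accounts for output tokens. This is a sketch; the function name and the per-million rates are placeholders, so substitute your model's actual pricing:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimated request cost in dollars, given per-million-token rates.

    Rates are placeholders; check your model's actual pricing.
    """
    return ((input_tokens / 1_000_000) * input_rate
            + (output_tokens / 1_000_000) * output_rate)

# e.g. 2,000 input tokens at $3.00/M plus 500 output tokens at $15.00/M
cost = estimate_cost(2_000, 500, 3.00, 15.00)
```

Running such a check before each call makes it easy to enforce a per-request spending cap on top of the token count.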

Token Conversion Reference

| Content Type          | Approximate Size | Tokens          |
|-----------------------|------------------|-----------------|
| Short email           | ~200 words       | ~250 tokens     |
| Blog post             | ~1,000 words     | ~1,250 tokens   |
| Long article          | ~5,000 words     | ~6,500 tokens   |
| Code file (500 lines) | ~15 KB           | ~4,000 tokens   |
| Book chapter          | ~10,000 words    | ~13,000 tokens  |
💡 Token Optimization Tips
  • Avoid redundant whitespace — extra spaces and newlines add tokens
  • Use concise language in system prompts — fewer tokens = lower cost
  • Batch related content — one larger request often beats many small ones
  • Consider summarization for context — compress long histories
  • Use structured outputs (JSON) — often more token-efficient than prose
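The first tip — trimming redundant whitespace — is easy to automate before a request goes out. This is a minimal sketch (the function name is illustrative); note that collapsing whitespace can break formatting-sensitive content such as code blocks or Markdown tables, so apply it selectively:

```python
import re

def squeeze_whitespace(text: str) -> str:
    """Collapse runs of spaces/tabs and excess blank lines to save tokens.

    Avoid applying this to formatting-sensitive input like code snippets.
    """
    text = re.sub(r"[ \t]+", " ", text)     # runs of spaces/tabs -> one space
    text = re.sub(r"\n{3,}", "\n\n", text)  # 3+ newlines -> one blank line
    return text.strip()
```

Combined with a token estimate before and after, this makes the savings from each cleanup step directly measurable.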

Optimize Your Token Usage

Count tokens to estimate costs and improve efficiency.

Tokenization Guide →
Cost Management →