Moderation API

Keep your platform safe with AI-powered content moderation. Detect hate speech, violence, self-harm, and other harmful content in real time.

"Great product, love the design!"
Hate: Safe · Violence: Safe · Self-harm: Safe · Sexual: Safe

Detection Categories

Comprehensive coverage across harmful content types.

🚫 Hate Speech: discrimination, slurs, targeted harassment

⚠️ Violence: threats, graphic violence, weapons

💔 Self-Harm: self-injury and suicide content

🔞 Sexual Content: adult and explicit material

👶 Child Safety: CSAM, child exploitation

💊 Illicit Substances: drug promotion, illegal activities

🎭 Harassment: bullying, intimidation, threats

📧 Spam: unsolicited bulk content, scams, phishing
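In code, each of these categories surfaces as a boolean in a moderation result, alongside a per-category confidence score, and a result counts as flagged when any category trips. A minimal sketch using plain dicts (the snake_case keys are illustrative, not confirmed API field names):

```python
# Illustrative category booleans as they might appear in result.categories.
categories = {
    "hate": False,
    "violence": True,
    "self_harm": False,
    "sexual": False,
    "child_safety": False,
    "illicit_substances": False,
    "harassment": True,
    "spam": False,
}

# A result is flagged when any single category is tripped.
flagged = any(categories.values())
tripped = sorted(name for name, hit in categories.items() if hit)
print(flagged, tripped)  # True ['harassment', 'violence']
```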

Key Features

Real-Time Detection: sub-100ms response times for instant moderation

Confidence Scores: probability scores for nuanced decision-making

Multi-Language: support for 20+ languages out of the box

Text + Images: moderate both text content and images

Custom Thresholds: adjust sensitivity per category

Batch Processing: process thousands of items at once
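Confidence scores and custom thresholds combine naturally: rather than relying on a single boolean flag, you can compare each category's score against your own per-category cutoff. A minimal sketch in plain Python (the threshold values and category keys are illustrative assumptions, not part of the API):

```python
# Per-category sensitivity thresholds (illustrative values).
# A lower threshold means stricter moderation for that category.
THRESHOLDS = {
    "hate": 0.40,
    "violence": 0.50,
    "self_harm": 0.30,
    "sexual": 0.60,
}

def flagged_categories(category_scores, thresholds=THRESHOLDS):
    """Return the categories whose score meets or exceeds its threshold."""
    return sorted(
        category
        for category, score in category_scores.items()
        if score >= thresholds.get(category, 0.5)  # default cutoff: 0.5
    )

# Example scores shaped like result.category_scores:
scores = {"hate": 0.02, "violence": 0.55, "self_harm": 0.01, "sexual": 0.10}
print(flagged_categories(scores))  # ['violence']
```

Tuning these cutoffs per category lets a gaming chat tolerate mild profanity while keeping self-harm detection highly sensitive.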

Industry Applications

💬 Social Platforms: keep communities safe by filtering harmful posts, comments, and messages in real time.

🛒 E-Commerce: moderate product reviews, seller profiles, and marketplace listings.

🎮 Gaming: filter in-game chat, usernames, and user-generated content.

📚 Education: ensure safe learning environments in educational platforms and forums.

💼 Enterprise: moderate internal communications and collaboration tools.

📱 Dating Apps: screen profiles, messages, and photos for safety.

Simple Integration

Python - Content Moderation
from mythicdot import MythicDot

client = MythicDot()

# Moderate a single piece of text
response = client.moderations.create(
    input="User submitted content to check..."
)

result = response.results[0]

if result.flagged:
    print("⚠️ Content flagged!")
    for category, flagged in result.categories.items():
        if flagged:
            score = result.category_scores[category]
            print(f"  - {category}: {score:.2%}")
else:
    print("✅ Content is safe")

# Batch moderation
texts = ["Comment 1", "Comment 2", "Comment 3"]
batch_response = client.moderations.create(input=texts)

for i, result in enumerate(batch_response.results):
    status = "⚠️ Flagged" if result.flagged else "✅ Safe"
    print(f"Text {i+1}: {status}")
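For the "thousands of items" case, APIs typically cap how many inputs one request accepts, so large workloads are usually split into fixed-size chunks and submitted request by request. A small chunking helper to go with the batch call above (the batch size of 100 is an assumption, not a documented MythicDot limit):

```python
def chunked(items, size=100):
    """Yield successive fixed-size chunks from a list of texts."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical workload: 250 comments become 3 requests.
comments = [f"Comment {i}" for i in range(250)]
batch_sizes = [len(batch) for batch in chunked(comments)]
print(batch_sizes)  # [100, 100, 50]

# Each chunk would then be sent as one batch call:
# for batch in chunked(comments):
#     batch_response = client.moderations.create(input=batch)
```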

Simple Pricing

FREE for standard moderation

Protect Your Platform

Start moderating content for free. No credit card required.

Read the Docs · Get Started Free