Advancing the frontiers of artificial intelligence through fundamental research and open collaboration.
Developing next-generation language models with improved reasoning, efficiency, and alignment with human values.
Ensuring AI systems are safe, robust, and aligned with human intentions through interpretability and testing.
Creating more efficient training and inference methods to reduce computational costs and environmental impact.
Building models that understand and generate across text, images, audio, and video modalities.
Developing AI agents that can plan, reason, and execute complex tasks with minimal human intervention.
Advancing federated learning and differential privacy techniques for secure AI applications.
We discovered new scaling laws that enable 3x more efficient training of large language models by optimizing compute allocation.
A novel approach to AI alignment that trains models to follow principles through iterative self-improvement.
State-of-the-art text embeddings that achieve superior retrieval performance with 10x smaller model size.
Comprehensive study of mixture-of-experts architectures and best practices for training at trillion-parameter scale.
We believe in advancing AI through open collaboration. Explore our open-source projects.
High-performance transformer implementations optimized for production inference.
Fast and memory-efficient attention mechanisms for transformers.
Comprehensive evaluation framework for language models.
Chief Scientist
AI Safety & Alignment
Research Director
Large Language Models
Senior Researcher
Multimodal Learning
Senior Researcher
Efficient Training
Work on the most challenging problems in AI with world-class researchers.