DeepSeek released a preview of its V4 model on April 25, 2026. The large language model has 1.6 trillion total parameters and 49 billion active parameters, making it the largest open-weight model publicly available and nearly double the size of its predecessor, V3.2. On preliminary benchmarks, V4 shows meaningful improvements over V3.2 in reasoning, code generation, and long-context handling, continuing DeepSeek's pattern of releasing models that match or approach Western frontier-lab performance at dramatically lower training cost. Unlike GPT-5.5 or Claude Mythos, V4 will be available for self-hosted deployment once fully released, which means cost-sensitive applications in writing assistance, document processing, and code generation can run it on their own infrastructure at a per-token cost that commercial API pricing cannot match. The muted market reaction, compared with DeepSeek-V3's 2025 launch, reflects that the AI market has absorbed the efficiency story: open-weight models at frontier capability are now an expected feature of the competitive landscape, not a shock.
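The self-hosting cost argument comes down to simple arithmetic: a dedicated cluster's hourly cost amortized over its token throughput. A minimal sketch of that calculation is below; every number in it (GPU rental rate, cluster size, throughput, API price) is a hypothetical placeholder for illustration, not a figure from the article or from DeepSeek.

```python
# Back-of-envelope comparison of self-hosted vs. API per-token cost.
# All figures are hypothetical placeholders, not from the article.

def cost_per_million_tokens(gpu_hour_cost: float, num_gpus: int,
                            tokens_per_second: float) -> float:
    """Serving cost in dollars per 1M tokens for a dedicated cluster."""
    tokens_per_hour = tokens_per_second * 3600
    cluster_cost_per_hour = gpu_hour_cost * num_gpus
    return cluster_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical: 8 GPUs at $2/hr each, sustaining 500 tokens/s aggregate.
self_hosted = cost_per_million_tokens(gpu_hour_cost=2.0, num_gpus=8,
                                      tokens_per_second=500.0)
api_price = 10.0  # hypothetical commercial API price, $ per 1M tokens

print(f"self-hosted: ${self_hosted:.2f} per 1M tokens")
print(f"api:         ${api_price:.2f} per 1M tokens")
```

The break-even point depends entirely on utilization: the cluster cost is fixed per hour, so the per-token figure only beats API pricing when the hardware is kept busy.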