OpenAI launched GPT-5.4-Cyber on April 14—a fine-tuned version of its GPT-5.4 flagship model built specifically for defensive cybersecurity work. Unlike standard GPT-5.4, this "cyber-permissive" variant has lower refusal boundaries for legitimate security tasks, including vulnerability research, binary reverse engineering, and zero-day analysis. The rollout is gated through OpenAI's expanded Trusted Access for Cyber (TAC) program, which now admits thousands of verified individual defenders and hundreds of security teams. Access requires identity verification, and higher TAC tiers unlock more permissive capabilities.

The headline new capability: security professionals can analyze compiled software for malware and vulnerabilities without needing the original source code—a major time-saver for incident response. OpenAI's Codex Security has already contributed fixes to more than 3,000 critical and high-severity vulnerabilities in the wild.

For context: one week earlier, Anthropic's Claude Mythos Preview found 27-year-old bugs in OpenBSD and chained Linux kernel vulnerabilities for privilege escalation; GPT-5.4-Cyber is OpenAI's direct response. The strategic rationale mirrors Anthropic's: by deploying these models exclusively to verified defenders, both companies are positioning themselves as the infrastructure layer for enterprise security—a market worth over $200 billion annually.