
OpenAI and Anthropic Brief Congress on Advanced Cyber-Capable AI Models

AI labs are now briefing lawmakers on frontier models that can help defenders find vulnerabilities faster, while also raising concerns about misuse against critical infrastructure.

April 28, 2026 · 13 min read

The short version

OpenAI and Anthropic privately briefed House Homeland Security Committee staff on new cyber-capable AI models. The briefings focused on how advanced models could affect critical infrastructure, national security, and future AI regulation.

This is one of the clearest signs that frontier AI policy has entered a new phase. Lawmakers are no longer only discussing deepfakes, bias, copyright, or classroom cheating. They are being briefed on models that may accelerate cyber defense and vulnerability discovery, and that could also be turned to offensive misuse.

Why cyber-capable AI is different

Most AI risks involve scale: more spam, more synthetic media, more low-quality content, more automated scams. Cyber-capable AI is different because it can compress the time between discovering a weakness and acting on it. That matters for both defenders and attackers.

A capable model can help a security team read code, explain a vulnerability, reproduce a bug, prioritize patches, write detections, and understand a complicated incident. Those are valuable defensive uses, especially for small organizations that lack deep security staff.

The same skills can also help malicious actors. If a model can quickly find and explain critical flaws, it can lower the expertise needed to abuse those flaws. This is the dual-use problem at the center of the briefings.

What OpenAI and Anthropic are doing differently

Anthropic has held off on broad public release of Mythos Preview because of its ability to quickly find and exploit critical security flaws. That is a restrictive posture: keep the model away from broad access until the company and government are more comfortable with the safeguards.

OpenAI has taken a tiered approach with GPT-5.4-Cyber. The idea is to allow more permissive defensive use for vetted users and teams while limiting broad access to the most sensitive capabilities. That approach recognizes that defenders need stronger tools, but it also treats identity, purpose, and oversight as part of the product.

Neither approach is perfect. Withholding a powerful model may slow down defenders who could use it responsibly. Releasing through tiers may still create leakage, misuse, or inconsistent enforcement. The hard policy question is how much risk society should accept to improve defense.

Why Congress is paying attention now

The House Homeland Security Committee has been holding private roundtables and hearings on generative AI and national security. The OpenAI and Anthropic briefings followed other discussions about jailbroken models, including demonstrations that raised alarm among lawmakers.

The timing matters. Critical infrastructure sectors such as hospitals, water systems, utilities, local governments, and school districts are often under-resourced. They face sophisticated cyber threats without the staffing or budgets of large technology companies. If AI can help those defenders, Congress has a reason to support controlled access.

But the same sectors are also attractive targets. If offensive actors get better tools, the most vulnerable organizations may face more pressure, not less. That creates an urgent but uncomfortable policy tradeoff.

What defensive access should mean

Defensive access cannot simply mean that the user says they are a defender. It needs structure. A serious access program should verify identity, organization, use case, and authorization. It should log high-risk workflows, rate-limit sensitive actions, monitor for misuse patterns, and provide a way to revoke access quickly.
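To make that concrete, here is a minimal sketch of what one record in such a program might hold, along with logging and revocation helpers. The field names and functions are hypothetical illustrations, not any lab's actual system; a real program would back this with external identity verification and tamper-resistant audit storage.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessGrant:
    """One vetted user's standing in a defensive-access program (hypothetical)."""
    user_id: str
    organization: str
    use_case: str            # e.g. "internal code review"
    authorized_by: str       # who signed off on the grant
    revoked: bool = False
    audit_log: list = field(default_factory=list)

def record_high_risk_action(grant: AccessGrant, action: str) -> None:
    """Append an auditable entry for a sensitive workflow."""
    grant.audit_log.append((datetime.now(timezone.utc).isoformat(), action))

def revoke(grant: AccessGrant, reason: str) -> None:
    """Cut off access immediately; the reason stays in the audit trail."""
    grant.revoked = True
    record_high_risk_action(grant, f"revoked: {reason}")
```

The point of the structure is that every element of the grant (identity, organization, use case, authorization) is recorded up front, so revocation and misuse review have something concrete to act on.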

It should also distinguish between categories of security work. Explaining a vulnerability report is lower risk than generating exploit code. Reviewing internal code is different from scanning third-party systems. Helping a hospital patch known software is different from probing an unknown target.

The best systems will likely use tiered permissions. Basic defensive guidance can be widely available. More powerful workflows should require stronger verification. The highest-risk capabilities may need restricted programs, direct partnerships, or government oversight.
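As a rough illustration of that tiering, the sketch below maps the task categories mentioned above to invented access tiers. The category names, tier labels, and assignments are assumptions made for this example, not any provider's published policy.

```python
from enum import IntEnum

class Tier(IntEnum):
    OPEN = 0        # broadly available defensive guidance
    VERIFIED = 1    # identity- and organization-verified users
    RESTRICTED = 2  # vetted programs, partnerships, or government oversight

# Illustrative mapping from work categories to required tiers.
REQUIRED_TIER = {
    "explain_vulnerability_report": Tier.OPEN,
    "review_internal_code":         Tier.VERIFIED,
    "write_detections":             Tier.VERIFIED,
    "generate_exploit_code":        Tier.RESTRICTED,
    "scan_third_party_systems":     Tier.RESTRICTED,
}

def allowed(user_tier: Tier, task: str) -> bool:
    """A user may run a task only at or above its required tier."""
    return user_tier >= REQUIRED_TIER.get(task, Tier.RESTRICTED)
```

Note the default: a task the policy has never seen falls into the most restrictive tier, which is the safer failure mode for dual-use capability.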

What safeguards matter most

The most important safeguard is not a single refusal rule. It is an ecosystem of controls: identity checks, usage monitoring, model behavior limits, red-team testing, incident reporting, and clear paths for researchers to report failures.
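A toy sketch of that layered idea follows: each control is an independent check, and a request proceeds only if every layer passes. The field names and the rate-limit threshold are invented for illustration.

```python
def identity_verified(req: dict) -> bool:
    return req.get("grant_valid", False)

def within_rate_limit(req: dict) -> bool:
    return req.get("recent_sensitive_calls", 0) < 10  # assumed cap

def no_misuse_flags(req: dict) -> bool:
    return not req.get("misuse_flags")

CONTROLS = [identity_verified, within_rate_limit, no_misuse_flags]

def evaluate(request: dict) -> bool:
    """A single refusal rule is replaced by a stack of layered checks."""
    return all(check(request) for check in CONTROLS)
```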

Logging is especially complicated. Security teams may need privacy and confidentiality when testing sensitive systems. At the same time, model providers need enough visibility to detect abuse. That tension will shape enterprise and government AI contracts.

Another key safeguard is context. A model should behave differently when a verified security engineer asks for help patching an internal system than when an anonymous account asks for steps to exploit a public target. Building that context into access policy is difficult, but necessary.
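One way to picture context-dependent behavior is a policy function keyed to verified enrollment rather than to whatever the request claims. The fields and mode names below are hypothetical; they sketch the shape of the decision, not an actual deployment.

```python
def policy_for(request: dict) -> str:
    """Pick a coarse behavior mode from who is asking and about what.

    A real system would key off verified enrollment records, not
    claims supplied in the request itself.
    """
    verified = request.get("verified_security_engineer", False)
    owns_target = request.get("target_is_own_system", False)
    if verified and owns_target:
        return "assist"              # full defensive help, logged
    if verified:
        return "assist_with_review"  # help, flagged for human review
    return "conceptual_only"         # explain concepts, no operational steps
```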

The China memo adds another layer

The briefings also touched on a recent White House memo accusing China of large-scale efforts to copy American AI models. That issue connects cyber-capable models to a broader national competitiveness debate.

If the U.S. restricts advanced models too much, defenders and companies may lose access to useful tools. If it releases too freely, adversaries may gain capabilities faster. If model weights, training methods, or specialized cyber capabilities leak, the strategic advantage may narrow.

This is why the government is increasingly treating frontier AI as infrastructure, not just software. The models are becoming part of national cyber posture.

Implications for schools, nonprofits, and small teams

Most small organizations will not receive direct access to the highest-risk cyber models. But they will be affected by the policy decisions around them. If controlled-access defensive tools mature, smaller organizations may eventually benefit through managed security providers, nonprofit cyber programs, or vendor-integrated protection.

In the near term, organizations should assume that both defenders and attackers will get faster. That means basic security hygiene matters more: multi-factor authentication, patching, backups, least-privilege access, staff training, and incident plans.

AI will not replace those fundamentals. It will raise the penalty for ignoring them because attackers can automate more reconnaissance and defenders will need cleaner systems to take advantage of AI-assisted protection.

What to watch next

Watch whether Congress moves from briefings to access rules. A federal AI framework could define what counts as high-risk cyber capability, when labs must notify government, and how companies should control access to models with advanced exploit or vulnerability-discovery skills.

Also watch how labs describe their own release policies. If Anthropic keeps Mythos restricted while OpenAI expands GPT-5.4-Cyber through trusted access, the industry will have two competing models for managing dual-use capability.

The important takeaway is not that AI will automatically make the internet unsafe. It is that cyber defense and cyber offense are both getting faster, and policy now has to govern speed, access, and accountability at the same time.