
CMMC Is Getting an AI Upgrade: What Defense Contractors Need to Know

CMMC has consumed a lot of attention among those who do business with the Pentagon over the past few years. Now there's an AI layer coming on top of it. The FY2026 National Defense Authorization Act, signed in December 2025, includes a provision that directs the Department of Defense to develop a cybersecurity framework specifically for AI and machine learning systems and to build it as an extension of CMMC. Here's what the new provision requires, why your current AI usage is already a CMMC issue, and what to do about both right now.

What the New Provision Actually Says

The provision is Section 1513, formally titled "Physical and Cybersecurity Procurement Requirements for Artificial Intelligence Systems." It defines "covered AI/ML technology" to include source code, model weights, training data, algorithms, and the software used to evaluate whether the AI is trustworthy. In practice, that means if you develop, deploy, store, or host any of that for DoD, you're a covered entity under this section.

What Section 1513 doesn't do is create a ready-made framework. Instead, it orders the Pentagon to build one that would address the following six categories of risk:

  • Workforce and insider threats: employees or contractors who misuse access to AI systems, whether intentionally or through lack of training.
  • AI-specific vulnerabilities: attacks like data poisoning, where bad data is fed into a model to corrupt its outputs, or unintended exposure of sensitive information through AI processes.
  • Supply chain risks: compromised datasets, tainted model weights, or counterfeit components introduced somewhere in the AI development pipeline.
  • Adversarial tampering: deliberate manipulation of hardware, software, data, or processes that support AI systems.
  • Data theft: targeted stealing of AI systems themselves, their training data, or their outputs.
  • Security monitoring: continuous assessment of the security posture of AI systems after deployment, not just at the point of acquisition.

The law also specifies that the framework must be risk-based, so the security requirements scale with the national security sensitivity of the AI system. For example, a facial recognition tool used for base access gets different treatment than an internal scheduling bot, which matters given the Pentagon's current push to move faster on AI across the board.

The good news is that this won't be a completely separate compliance track. Section 1513 says the framework should be built as "an extension or augmentation" of existing DoD cybersecurity frameworks, and it names CMMC directly. It also says DoD should draw on the NIST 800 series of publications, which is the same foundation that CMMC Level 2 already sits on. If you're working toward CMMC compliance now, the groundwork you're laying applies here too.

That also means the reverse is true. The same CMMC rules that govern how you handle Controlled Unclassified Information already have something to say about how your team uses AI tools.

Why Section 1513 Matters Even Before It Takes Effect

Section 1513's framework won't arrive for a while, but the CMMC rules it will bolt onto already govern how your team uses AI tools today, even if you're not building AI systems for the Pentagon.

Like most organizations, you're most likely using (or thinking about using) AI to draft proposals, proofread documents, clean up technical reports, and much more. Under the CMMC rules already in effect, such uses can create a compliance problem, one that Section 1513's framework will only make more explicit.

It all starts with the DoD's Level 2 scoping guide. A "CUI Asset" is anything that processes, stores, or transmits Controlled Unclassified Information. Every CUI Asset must be:
  • Documented in your asset inventory
  • Described in your System Security Plan (SSP)
  • Included in your network diagrams

When an employee pastes a paragraph from a CUI document into a commercial AI chatbot, that service is now processing CUI. The scoping guide treats it as an External Service Provider within your assessment boundary. As such, it's subject to the same controls as any other system that handles CUI.

Under DFARS 252.204-7012, any cloud service provider that processes, stores, or transmits CUI must meet FedRAMP Moderate authorization at minimum. ChatGPT, Claude, Gemini, Grammarly, GitHub Copilot, and many others don't carry the necessary FedRAMP authorization for CUI. ChatGPT has achieved FedRAMP 20x Low accreditation, but Low isn't sufficient since CUI requires Moderate or High.

If your employees are using any of these tools with CUI, your organization is relying on a cloud service that doesn't meet the authorization requirements your CMMC assessment is built around.

However, there are three notable exceptions:
  • Microsoft 365 Copilot in GCC High reached general availability in December 2025 and operates under FedRAMP High authorization. This is the most straightforward path for contractors already in a GCC High environment.
  • Azure OpenAI Service in Azure Government and AWS Bedrock in GovCloud are also FedRAMP High authorized, though they require more technical lift to deploy.
  • On-premise AI deployments eliminate the cloud authorization question entirely but require real infrastructure investment.

Beyond those three, everything else that is commercially popular falls outside the FedRAMP boundary.

If your organization submits a CMMC self-attestation while employees are sending CUI to non-FedRAMP-authorized AI services, you're asserting compliance you don't have, and that is False Claims Act territory.

What to Do Before June 2026

In June, Congress expects to see DoD's plan for the AI security framework, including timelines, milestones, and resource requirements. That's when the direction and pace of the coming requirements will become clear, but you don't need to (and shouldn't!) wait for the report to act.

As explained in the previous section, the CMMC rules that are already in effect apply to any AI tool that touches CUI, so it's in your best interest to get ahead of the problem now rather than scramble when the Section 1513 framework adds new requirements on top.

Run an AI Tool Inventory

You need to find out whether anyone in the organization is, for example, using a personal ChatGPT account on a company device, running CUI-adjacent tasks through Grammarly, or pasting contract language into a commercial AI service.

A few places to look:
  • Browser extensions on company-managed devices
  • SaaS subscriptions billed to corporate cards or expensed individually
  • AI features embedded in tools you already use (Microsoft Editor, Google's Gemini integration in Workspace, GitHub Copilot in developer environments)

If a tool touches CUI, it belongs in your asset inventory. If it doesn't touch CUI, document that boundary and make sure the separation is enforced, not just assumed.
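One practical way to start that inventory is to scan a proxy or DNS log export for traffic to well-known commercial AI domains. The sketch below assumes a simple export with `user` and `domain` fields and an illustrative, non-exhaustive domain list; adapt both to your own environment and logging stack.

```python
from collections import Counter

# Hypothetical blocklist of common commercial AI service domains.
# Not exhaustive -- substitute your own list based on what your
# proxy, DNS, or CASB vendor publishes.
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "app.grammarly.com",
}

def find_ai_traffic(rows):
    """Count requests to known AI domains, grouped by (user, domain).

    Each row is a dict with 'user' and 'domain' keys; adjust the
    field names to match your own log export format.
    """
    hits = Counter()
    for row in rows:
        domain = row.get("domain", "").lower()
        # Match the domain itself or any subdomain of it.
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            hits[(row.get("user", "unknown"), domain)] += 1
    return hits

# Inline sample rows; in practice, feed rows from csv.DictReader
# over your proxy or DNS log export.
sample = [
    {"user": "alice", "domain": "chatgpt.com"},
    {"user": "alice", "domain": "chatgpt.com"},
    {"user": "bob", "domain": "intranet.example.com"},
    {"user": "bob", "domain": "claude.ai"},
]
print(find_ai_traffic(sample))
```

A scan like this only surfaces network traffic; it won't catch embedded AI features that ride on already-allowed domains, which is why the manual checks above still matter.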

Block Commercial AI from CUI-Boundary Devices

Written policies telling employees not to use ChatGPT aren't enough. C3PAOs want to see technical controls, such as DNS filtering, Data Loss Prevention rules, CASB monitoring, or browser-level restrictions, that prevent CUI from reaching non-compliant services.
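To illustrate the difference between policy and technical control: even something as blunt as a hosts-file sinkhole pushed out by your device management tooling blocks these services at the endpoint. The domains below are examples only; a production deployment would use the DNS filtering, DLP, or CASB controls named above rather than hosts files.

```
# Minimal sketch: null-route commercial AI services on CUI-boundary
# devices. Illustrative domains only -- maintain a real blocklist
# in your DNS filter or CASB policy instead.
0.0.0.0  chatgpt.com
0.0.0.0  chat.openai.com
0.0.0.0  claude.ai
0.0.0.0  gemini.google.com
0.0.0.0  app.grammarly.com
```

Whatever the mechanism, the point for an assessor is the same: the control is enforced by configuration, not by asking employees nicely.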

Update Your System Security Plan

Every AI tool within the assessment boundary must be documented in the SSP, including its role, data flows, and security posture. If you're using a FedRAMP-authorized AI service like Microsoft Copilot in GCC High, that relationship needs to be documented along with the provider's service description and shared responsibility matrix. If you're using no AI tools with CUI, document that too, because the assessor will ask.

Create an AI Acceptable Use Policy

Define which tools are authorized, what categories of information are off-limits for AI input, and what the approval process looks like for adopting new AI tools. Then train your employees on it. CUI handling rules don't change because the information is being processed by an AI instead of a person.

If you don't have an acceptable use policy in place yet, OSIbeyond's IT Security Policy Template is a good starting point to build from.

Evaluate Your FedRAMP-Authorized AI Options

If your team needs AI capabilities for work that involves CUI, the compliant paths right now run through Microsoft GCC High, Azure Government, or AWS GovCloud. Each carries its own cost and complexity. If you don't need AI for CUI-related work, the simpler path is to keep AI tools entirely outside the CUI boundary and enforce that separation with the technical controls described above.

How OSIbeyond Can Help

OSIbeyond is a CMMC Level 2 certified Managed Service Provider and Registered Practitioner Organization listed with the CyberAB. If you're unsure where your organization stands on AI tool usage, SSP documentation, or FedRAMP environment readiness, OSIbeyond's compliance team can walk you through it. Schedule a meeting with one of our CMMC Registered Practitioners to discuss your next steps.