← Noma Blog

AI Security is Now National Security – The White House AI Action Plan

Published: Jul 24, 2025 · 4 min. read

The White House has just released its comprehensive America’s AI Action Plan,” marking a fundamental shift in how the United States approaches artificial intelligence development and deployment. The plan identifies more than 90 Federal policy actions across three pillars. 1. Accelerating Innovation, 2. Building American AI Infrastructure, and 3. Leading in International Diplomacy and Security.

For security professionals, this isn’t just another policy document. It’s a roadmap that will reshape cybersecurity requirements, compliance frameworks, and threat landscapes across every industry. It’s a clear message from the world’s leading nation acknowledging that AI security is national security, and that AI risk is real and requires urgent action. 

Five ways the AI Action Plan validates enterprise AI security

The plan introduces five key initiatives that validate the fundamental principles of AI security, including:

AI Red Teaming and an AI Evaluation Ecosystem: The plan emphasizes building a robust evaluation framework for AI systems. The DOD, DOE, CAISI at DOC, the Department of Homeland Security (DHS), NSF, and academic partners will solicit the best and brightest from U.S. academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities. 

Secure-By-Design AI Systems: The action plan strongly emphasizes the need for inherently secure AI systems, applications and agents. All use of AI in safety-critical or homeland security applications should entail the use of secure-by-design, robust and resilient AI systems that are instrumented to detect performance shifts, and alert to potential malicious activities like data poisoning or adversarial attacks.

Open-Source and Open-Weight Model Security: The plan acknowledges both the value and risks of open AI models. Open-source and open-weight AI models are made freely available by developers for anyone in the world to download and modify. Models distributed this way have unique value for innovation because they can be used flexibly without being dependent on a closed model provider and can be retrained for a specific goal. However, these models carry elevated security risks due to lack of embedded guardrails or enterprise governance.

Model Control, Interpretability, and Robustness: The plan calls for significant investments in making AI systems more controllable and interpretable. Launch a technology development program led by the Defense Advanced Research Projects Agency in collaboration with CAISI at DOC and NSF, to advance AI interpretability, AI control systems, and adversarial robustness.

AI Incident Response and Threat Sharing: Perhaps most importantly for security teams, the plan establishes formal AI incident response capabilities. By establishing an AI Information Sharing and Analysis Center (AI-ISAC), led by DHS, in collaboration with CAISI at DOC and the Office of the National Cyber Director, to promote the sharing of AI-security threat information and intelligence across U.S. critical infrastructure sectors.

What does the AI Action Plan mean for enterprise CISOs?

The AI Action Plan makes it clear that AI governance is moving from optional to mandatory across critical sectors and there will be policies implemented that organizations should understand. Here’s what security leaders should consider as regulations continue to emerge.

AI Action Plan What the AI Action Plan means for CISOs? Immediate Risk / Opportunity
AI systems will be treated as critical infrastructure All AI use cases in safety‑critical or homeland‑security applications should entail the use of secure‑by‑design, robust and resilient AI systems. Boards and regulators will expect runtime protection, incident response playbooks, posture risk assessment and continuous assurance for every production AI model.
Federal agencies will formalize red‑teaming and evaluations The AI Action Plan directs multiple agencies to coordinate AI testing initiatives for security vulnerabilities. Controls for AI runtime protection and AI red teaming will become table stakes for demonstrating due diligence in AI risk management.
Open‑weight models are encouraged but must be governed Government support for open-source and open-weight AI models come with expectations for proper oversight. Enterprises adopting OSS models will need an AI Bill of Materials (AIBOM), license compliance tracking, and AI supply chain risk controls.
AI incident response will be codified Led by NIST at DOC, including CAISI, will partner with the AI and cybersecurity industries to ensure collaboration in the establishment of AI standards, response frameworks, best-practices, and technical capabilities. SOC and IR teams must integrate model / agent-level telemetry and response capabilities immediately.

Enterprise AI security leads the way

The White House AI Action Plan underscores the fundamental reality that AI systems must be built with security commensurate with their value. As AI evolves from simple chatbots to autonomous agents capable of taking action across enterprise systems, the security stakes have never been higher. 

At Noma Security, we run ahead of the AI innovation curve by providing an AI security and governance platform that delivers secure AI infrastructure required for confident AI adoption. We do this while satisfying emerging AI regulatory requirements from the U.S. federal government and the EU AI Act, for example.

Our approach aligns directly with the White House’s vision of secure-by-design AI systems, offering integrated capabilities that address the specific requirements outlined in the federal action plan.

AI Discovery and Governance: Noma Security provides comprehensive visibility into AI usage across organizations, including model inventory management via an automated AIBOM, usage tracking, and policy enforcement across the entire AI landscape. These are critical capabilities for managing the open-source model adoption that the AI Action Plan encourages, and enables compliance across the AI ecosystem. 

AI Supply Chain Security: From an AI Bill of Materials (AIBOM) to license compliance and vulnerability scanning, Noma Security helps organizations maintain security and compliance across their entire AI supply chain, addressing the governance challenges of modern AI development.

AI red-teaming and Testing: The Noma Security platform enables organizations to proactively test their AI systems for the exact vulnerabilities that federal initiatives will target, from prompt injection and jailbreak attempts to data leakage and model manipulation.

AI Runtime Protection: Noma Security ensures that AI systems remain secure during operation by providing native attack detection and prevention capabilities at point of inference, prompt injection and model misbehavior, adhering to the “secure-by-design” requirements outlined in the AI Action Plan.

The time to act is now

The U.S. government has made it unmistakably clear; AI security is now national security.  artificial intelligence is now a critical infrastructure priority and AI innovation must, and will, continue to evolve for competitive advantages and economic gains. Secure-by-design AI is therefore a strategic imperative. 

If you are ready to get ahead of the AI security curve, contact us to learn how our comprehensive AI security platform can help your organization be ready for the AI future.