Securing the Next Wave of Federal AI Adoption

November 24, 2025

Federal agencies are accelerating their adoption of artificial intelligence (AI) as they look to enhance mission outcomes. During this process, however, many agencies overlook a critical reality: AI applications are only as trustworthy as the security frameworks that protect them. While AI offers transformative capabilities, it also expands an agency's attack surface, creating new opportunities for data leaks, malicious actors, and insider threats.

Federal agencies are rapidly integrating AI into cloud and hybrid environments, whether it's predictive maintenance in defense systems or medical diagnostics in healthcare research. Unfortunately, this innovation often outpaces security governance. So, what does this mean for an organization?

  • AI Model Exposure: Models are vulnerable to theft or poisoning when Application Programming Interfaces (APIs) and data pipelines are left unsecured.
  • Shadow AI Risk: Departments deploy unapproved AI tools that lack FedRAMP® authorization or monitoring.
  • Data Exfiltration: Sensitive data can leak through unsecured endpoints, unmanaged SaaS tools, or employees mishandling data used for training or inference.
  • Pressure to Comply: Agencies must maintain compliance with EO 14158, FITARA, and FedRAMP® standards while deploying AI tools.

This issue is so widespread that the National Security Agency (NSA) recently published two Cybersecurity Information Sheets (CSIs) addressing these security concerns.

Easing the Pain of AI Adoption: Zscaler and Four Points Technology’s Partnership

  • Four Points Technology: As a trusted Service-Disabled Veteran-Owned Small Business (SDVOSB) Value-Added Reseller, Four Points Technology bridges mission requirements with Zscaler’s advanced security technology stack. Through established contracts, such as GSA and SEWP, Four Points Technology simplifies procurement and deployment of secure AI infrastructure across federal environments.
  • Zscaler: With its Zero Trust Exchange solution, Zscaler is redefining how AI workloads are protected across distributed networks. This platform uses AI-driven threat detection, data protection, and adaptive access controls to safeguard sensitive models and data whether they reside in the cloud or on-premises.

The MITRE ATT&CK® Framework

Maintaining compliance with standards like FISMA and NIST SP 800-53 is a requirement agencies must consider as they work to adopt, secure, and defend their AI systems. To support this compliance, Zscaler and Four Points Technology incorporate the MITRE ATT&CK® framework, which maps adversary tactics and techniques (e.g., reconnaissance, exfiltration) to specific NIST SP 800-53 control families, enhancing traceability and defensibility:

  • Reconnaissance: Scanning of AI APIs and endpoints
    • NIST Controls: CA-7, RA-5
    • Mitigation: Continuous posture monitoring with Zscaler Cloud Security
  • Execution: Injection of malicious training scripts
    • NIST Controls: SI-3, SI-4
    • Mitigation: Inline inspection via Zscaler Zero Trust Exchange
  • Persistence: Compromise of service account credentials
    • NIST Controls: AC-2, IA-2
    • Mitigation: Identity validation and conditional access
  • Exfiltration: Model data theft via unauthorized API calls
    • NIST Controls: SC-7, SC-8
    • Mitigation: Data Loss Prevention (DLP) and SSL inspection
  • Impact: Tampering with model outputs
    • NIST Controls: IR-4, SI-7
    • Mitigation: Behavioral analytics and anomaly detection
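The tactic-to-control mapping above is straightforward to encode for automation. The sketch below is a hypothetical illustration (not a Zscaler or Four Points product artifact) of how an agency's compliance tooling might represent it as a lookup table, so that an alert tagged with an ATT&CK tactic can be traced to the NIST SP 800-53 controls it implicates:

```python
# Hypothetical lookup table: MITRE ATT&CK tactics mapped to the NIST SP 800-53
# controls and mitigations listed in this article. Names and structure are
# illustrative assumptions, not an official schema.
ATTACK_TO_NIST = {
    "Reconnaissance": {
        "example": "Scanning of AI APIs and endpoints",
        "controls": ["CA-7", "RA-5"],
        "mitigation": "Continuous posture monitoring",
    },
    "Execution": {
        "example": "Injection of malicious training scripts",
        "controls": ["SI-3", "SI-4"],
        "mitigation": "Inline traffic inspection",
    },
    "Persistence": {
        "example": "Compromise of service account credentials",
        "controls": ["AC-2", "IA-2"],
        "mitigation": "Identity validation and conditional access",
    },
    "Exfiltration": {
        "example": "Model data theft via unauthorized API calls",
        "controls": ["SC-7", "SC-8"],
        "mitigation": "Data Loss Prevention (DLP) and SSL inspection",
    },
    "Impact": {
        "example": "Tampering with model outputs",
        "controls": ["IR-4", "SI-7"],
        "mitigation": "Behavioral analytics and anomaly detection",
    },
}

def controls_for_tactic(tactic: str) -> list[str]:
    """Return the NIST SP 800-53 controls mapped to an ATT&CK tactic."""
    entry = ATTACK_TO_NIST.get(tactic)
    return entry["controls"] if entry else []

print(controls_for_tactic("Exfiltration"))  # ['SC-7', 'SC-8']
```

A table like this lets an audit report or SIEM rule cite the relevant control family automatically whenever a detection is tagged with an ATT&CK tactic, which is one concrete way the traceability described above can be operationalized.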

Incorporating MITRE ATT&CK® into AI governance bridges the gap between compliance and real-world defense. Agencies can operationalize compliance, reduce unauthorized AI usage, and retain audit readiness while achieving full visibility and control.

Additionally, alignment with government policies and legislation helps secure funding for AI infrastructure in the defense, healthcare, and research sectors. Consider a recently published U.S. Government Accountability Office (GAO) report, which found that over the past two years the 11 selected agencies roughly doubled their AI use cases (from 571 in 2023 to 1,110 in 2024) and increased their generative AI use cases nearly ninefold (from 32 in 2023 to 282 in 2024). The report also included comments from agency officials that echo the concerns laid out earlier:

  • Challenges in adopting generative AI:
    • Complying with existing federal policies and guidance
    • Having sufficient technical resources and budget
    • Maintaining up-to-date appropriate use policies
  • The rapid evolution of AI technologies can complicate the establishment of generative AI policies.

Four Points Technology and Zscaler aim to directly address these concerns by providing agencies with a comprehensive, compliant, and continuously adaptive security model that ensures AI innovation and national mission success. If you are interested in learning more about this solution or partnership, please reach out to our team.
