US Federal Government Best Practices for AI Protection

Government agencies and critical infrastructure companies have long used machine learning (ML) and artificial intelligence (AI) to automate sensor data collection (SCADA/IoT), complete repetitive tasks, and power predictive analytics — all of which can enhance operational efficiency. Protecting the more complex, dynamic AI elements within agency systems, critical infrastructure systems, and the defense industrial base (DIB), however, presents a brave new world of challenges for government. Nation-states and bad actors are already adept at using AI in cyberattacks of every kind — phishing, smishing, ransomware, data exfiltration, DDoS, and more. They aim to burrow into, take down, and manipulate government and private sector AI systems, models, algorithms, and training data for personal, political, and/or financial gain.

The Cybersecurity and Infrastructure Security Agency’s (CISA) “AI Guide for Critical Infrastructure” is designed to help government agencies secure AI systems against increasingly sophisticated threats. The CISA guidance is in response to Executive Order 14110, which mandates secure and trustworthy AI development and usage across the US government. The guide aims to help federal, state, and local agencies understand where AI is being developed and used in critical infrastructure and government operations, and what immediate steps can be taken to secure AI-driven systems, applications, and data.

Cyberattacks on AI Assets

Millions of people depend on their networked smart devices and applications each day to make a bank transaction, call 911, or renew their benefits directly via government websites. Behind the scenes, networked sensors monitor telecommunications networks, water systems, and electricity consumption, and aid our military in protecting us. Monitoring metrics for uptime, availability, performance, and the security of legacy and hybrid cloud systems and applications is crucial to providing continuity of government operations (COOP) and critical disaster recovery (DR) operations in the event of a physical or cyberattack, emergency, or adverse event. But in an increasingly AI-driven world, staying ahead of adversaries is currently limited by legacy security monitoring, management (ITSM/ITOM/AIOps), and data backup/restoration tools. This technical debt, and complacency about migrating to newer, more secure cloud-native solutions, carries human and financial consequences. Adverse impacts of keeping the status quo include:

  • Financial damages from ransomware and fraud
  • Utility system outages and downtime (electric, gas, water, internet, etc.)
  • Compromised transportation systems security (smart cars, trains, air traffic, etc.)
  • Outcomes influenced by tampering with national security and industrial sector systems
  • Loss of crucial data, privacy, and intellectual property assets
  • Disruptions to government and business operations and online services

Use Robust Security Frameworks

The CISA guidance’s emphasis on a structured, risk-based governance model is more relevant than ever. By instilling these best practices, agencies can continue to build a resilient and secure foundation for AI-driven operations while realizing AI’s potential for modernizing IT. CISA’s guide emphasizes robust security frameworks to mitigate AI risks, while cautioning that they must not stifle the very innovation that gives the US a competitive edge. Risk mitigation strategies should follow the NIST Cybersecurity Framework, with which Veeam aligns, using its core functions — identify, protect, detect, respond, and recover — to help agency CIOs decide where to focus resources to optimize cybersecurity protection. In line with CISA guidance, Veeam recommends taking the following actions at a minimum:

  • Regularly update COOP/BCDR plans for radical resilience
  • Automate governance of AI systems to ensure compliance and safeguard privacy
  • Create and enforce policies that govern ethical AI usage
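Automating governance as recommended above can start with something as simple as a scripted check over an agency’s AI system inventory. The sketch below is purely illustrative — the field names and rules are hypothetical placeholders, not part of any CISA or NIST schema — but it shows how compliance and privacy gaps can be surfaced automatically:

```python
# Hypothetical sketch: automating a governance check over a catalog of AI systems.
# Field names and rules are illustrative; a real inventory would map each check
# to specific NIST AI RMF controls and agency policy.
def governance_findings(system: dict) -> list[str]:
    """Return a list of governance gaps found for one cataloged AI system."""
    findings = []
    if not system.get("owner"):
        findings.append("no accountable owner recorded")
    if system.get("handles_pii") and not system.get("privacy_review_done"):
        findings.append("PII handled without a completed privacy review")
    if not system.get("last_audit_date"):
        findings.append("no audit on record")
    return findings
```

Run against every entry in the catalog, such a check turns governance policy into a recurring, auditable report rather than a one-time exercise.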

What Agencies Can Do Right Now

CISA’s guidance instructs government agencies and critical infrastructure owners and operators to map, measure, govern, and manage AI technology, aligning it with the AI risk management framework developed by the National Institute of Standards and Technology (NIST). AI assets are embedded in critical infrastructure systems and are not only crucial to COOP, but high-value intellectual property that deserves protection.

Here are 13 actions agencies can take right now to begin mitigating current and future risks to their AI systems.

  1. Update COOP/BCDR plans: Adapt current disaster recovery plans to cover AI assets in all environments, adhering to the Veeam 3-2-1-1-0 backup rule: three copies of data on two different media, with one copy offsite, one copy offline or immutable, and zero errors verified through daily monitoring and periodic test restores. Automated backups with instant restoration capabilities for COOP can help safeguard against data loss from various threats.
  2. Map AI dependencies: Map the relationships among AI components, third-party suppliers, APIs, and support systems to identify weak spots and response strategies.
  3. Assess AI risks: Work with Veeam to quantify AI system risks and their potential impact to operations and end-users. Employ NIST’s frameworks to identify vulnerabilities and implement preventive measures.
  4. Continuously audit and monitor AI systems: Automate regular audits of AI algorithms, models, training data, and usage to ensure systems are efficient, unbiased, immutable, and secure.
  5. Catalog AI applications: Document how and where AI is utilized within your agency to highlight potential security gaps. Share best practices with agency and private sector peers about protective measures that improve overall security posture.
  6. Proactively test for weaknesses: Automate regularly scheduled scans for and patching of vulnerabilities in AI systems to reduce the risk of exploitation.
  7. Protect data and privacy end-to-end: Anonymize data to protect personal information while allowing AI to learn and analyze patterns in a secure environment. Encrypt data to ensure it remains protected at rest and in transit (AES 256) which helps prevent unauthorized access. To reduce exposure risk, minimize the amount of data collected from users to only what is necessary for AI functions.
  8. Regulatory compliance: Ensure AI systems comply with the GDPR when handling EU citizens’ data and with the CCPA for California residents. Also monitor emerging AI-related Executive Orders, mandates, agency guidance, and regulations, and comply in a timely manner.
  9. Employ ethical AI principles (five pillars): Make AI decisions transparent, fair, and understandable to stakeholders; trust is key to acceptance and use. Regularly audit AI systems against the five-pillars test: explainability, fairness, robustness, transparency, and privacy. Is what the AI does transparent, and can how it does it be easily explained to the average person? Does it protect privacy and fairness? Is it cloud scalable?
  10. Supply chain security: Evaluate third-party vendor risks in the supply chain of AI components, verify component authenticity, and ensure the integrity of AI system components to reduce the risk of supply chain attacks.
  11. AI lifecycle management: Version AI models to ensure traceability and manage updates effectively, and use model decommissioning procedures to safely retire outdated or obsolete models.
  12. Human oversight and intervention: Implement Human-in-the-Loop (HITL) approaches where critical decisions made by AI systems are reviewed and approved by humans. Continuously monitor elements with modern data protection tools to detect anomalies and intervene when AI systems behave unexpectedly.
  13. Experiment with Kubernetes-native data protection like Veeam Kasten: It’s trusted for its security advantages and is particularly suitable for decentralized and edge computing, enhancing performance and security for AI and machine learning applications (e.g., the DoD’s CJADC2 initiative uses Kubernetes data protection to gain resilience at the tactical edge).
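The 3-2-1-1-0 rule in the first action above lends itself to automated verification. The sketch below is a minimal illustration, assuming a hypothetical inventory model (the `BackupCopy` fields are placeholders, not a Veeam API), of how an agency might continuously confirm that an AI asset’s backups satisfy the rule:

```python
# Hypothetical sketch: checking a backup inventory against the 3-2-1-1-0 rule.
# The BackupCopy data model is illustrative, not a real Veeam API.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media_type: str       # e.g., "disk", "tape", "object-storage"
    offsite: bool         # stored at a different site
    immutable: bool       # offline, air-gapped, or immutability-locked
    verified_errors: int  # errors found in the most recent test restore

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    """3 copies, 2 media types, 1 offsite, 1 offline/immutable, 0 errors."""
    return (
        len(copies) >= 3
        and len({c.media_type for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.immutable for c in copies)
        and all(c.verified_errors == 0 for c in copies)
    )
```

Wiring a check like this into daily monitoring turns the rule from a planning guideline into a continuously verified control.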
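The anonymization step described above can be illustrated with a keyed-hash pseudonymization pass over records before they enter a training set. This is a minimal stdlib sketch under stated assumptions — the salt handling and record shape are hypothetical, and in production it would sit alongside AES-256 encryption at rest and in transit rather than replace it:

```python
# Hypothetical sketch: pseudonymizing user identifiers before AI training.
# Keyed SHA-256 (HMAC) is shown for illustration; the salt would live in a
# secrets vault and be rotated, not hard-coded as it is here.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-and-store-in-a-vault"  # placeholder, not a real key

def pseudonymize(identifier: str) -> str:
    # Keyed hash: the mapping cannot be rebuilt without the secret salt,
    # but the same input always yields the same token, so AI models can
    # still learn per-user patterns.
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "jdoe@example.gov", "usage_minutes": 42}
safe_record = {**record, "user_id": pseudonymize(record["user_id"])}
```

Minimizing collection comes first; pseudonymizing what remains limits exposure if a training set is ever exfiltrated.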
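The Human-in-the-Loop approach described above is often implemented as a routing gate: decisions below a confidence threshold, or touching high-impact actions, are held for human approval instead of executing automatically. The sketch below is a hypothetical illustration — the threshold, action names, and queue are placeholders, not a specific product’s API:

```python
# Hypothetical sketch of a Human-in-the-Loop (HITL) gate: low-confidence or
# high-impact AI decisions are queued for human review instead of auto-executing.
from dataclasses import dataclass, field

# Illustrative list of actions that always require a human approver
HIGH_IMPACT = {"shutdown_pump", "isolate_substation"}

@dataclass
class Decision:
    action: str
    confidence: float  # model's confidence in this decision, 0.0-1.0

@dataclass
class HITLGate:
    threshold: float = 0.95
    review_queue: list = field(default_factory=list)

    def route(self, d: Decision) -> str:
        if d.action in HIGH_IMPACT or d.confidence < self.threshold:
            self.review_queue.append(d)  # hold for a human approver
            return "pending_review"
        return "auto_approved"
```

Pairing a gate like this with anomaly monitoring gives operators a defined point to intervene when an AI system behaves unexpectedly.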

Following CISA’s guidance on structured governance for AI security is crucial. By adopting these practices, agencies secure AI infrastructures and make them resilient, paving the way for more advanced and efficient government services.

Find out more about how Veeam supports Data Backup and Protection for US Federal Government.

The post US Federal Government Best Practices for AI Protection appeared first on Veeam Software Official Blog.
