Interpretation of Guidelines for Secure AI System Development

December 11, 2023 | NSFOCUS

Introduction

On November 26, 2023, the Guidelines for Secure AI System Development were jointly released by the UK National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with the US National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the cybersecurity and information security agencies of more than ten other countries, including Australia, Canada, New Zealand, Germany, France, and Japan. More than ten organizations, including Amazon, Microsoft, IBM, and Google, participated in formulating the guidelines. Their purpose is to provide guiding principles for any provider of systems that use artificial intelligence (AI), helping them build AI systems that work as intended, are available when needed, and operate without revealing sensitive data to unauthorized parties.

Necessity of Secure AI System Development

Artificial intelligence (AI) systems have the potential to bring numerous benefits to society, but realizing those benefits fully requires that AI systems be developed, deployed, and operated securely. Cybersecurity is a necessary prerequisite for the functional safety, resilience, privacy, fairness, effectiveness, and reliability of AI systems. Beyond conventional cybersecurity threats, however, AI systems are exposed to unique vulnerabilities. For instance, attackers can exploit adversarial machine learning (note: 'AI' in the guidelines refers specifically to machine learning applications) to induce abnormal behavior in machine learning systems, degrading model performance, prompting unauthorized operations, and extracting sensitive model information. Attack methods include prompt injection and data poisoning. Secure design, development, deployment, operation, and maintenance are therefore crucial to securing the entire AI system lifecycle. The guidelines follow a 'secure by default' approach and adhere to 'secure by design' principles, which prioritize:

  • taking ownership of security outcomes for customers
  • embracing radical transparency and accountability
  • making secure design a top business priority through organizational structure and leadership

The guidelines provide recommendations for each stage: design, development, deployment, and operation and maintenance. Indeed, apart from the suggestions specific to artificial intelligence, many are also valuable references for other information systems.

Secure Design

The secure design guidelines provide guidance for the design phase of the AI system development life cycle. This includes understanding risks, threat modeling, and specific topics and trade-offs to consider in system and model design.

Raise staff awareness of threats and risks

System owners and senior leaders should understand AI threats and their mitigations, while data scientists and developers need sufficient knowledge of security threats and failure modes to help risk owners make informed decisions. Users should also be given guidance on the unique security risks facing AI systems.

Model the threats to the system

As part of the risk management process, a holistic approach should be taken to assess threats to the system, including understanding the potential harm to the system, users, organizations, and society if a specific AI component is compromised or exhibits anomalous behavior.

Design your system for security as well as functionality and performance

Confirm that the task is indeed suitable for AI solutions, evaluate AI-specific designs, and consider threat models and related countermeasures.

Whether developing components in-house or adopting external ones, prioritize supply chain security.

Consider security benefits and trade-offs when selecting an AI model

Selecting an AI model involves balancing various requirements, including the choice of model architecture, configuration, training data, training algorithm, and hyperparameters. Decisions must be grounded in an understanding of the threat model, and it is crucial to reassess them periodically as AI security research progresses and the understanding of threats evolves.

Secure Development

This section provides guidelines applicable to the development stage of the AI system development lifecycle, covering supply chain security, documentation, and asset and technical debt management.

Secure supply chain

Evaluate and monitor the security of the AI supply chain throughout the system lifecycle, and require suppliers to adhere to the same security standards that apply within your organization. Obtain externally sourced systems from validated, reliable sources, and maintain secure, well-documented hardware and software components to ensure system security.
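
As a concrete illustration of the "validated sources" point, a downloaded model artifact can be checked against a checksum published by the supplier before it is ever loaded. The sketch below uses only Python's standard library; the file path and expected digest are hypothetical placeholders.

    import hashlib
    import sys

    # Hypothetical values: in practice the expected digest would come from
    # the supplier's signed release notes or a trusted artifact registry.
    MODEL_PATH = "model.bin"
    EXPECTED_SHA256 = "replace-with-published-digest"

    def sha256_of(path: str) -> str:
        # Stream in chunks so large model weights do not exhaust memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
        sys.exit("Model artifact failed integrity check; refusing to load.")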

Identify, track, and protect assets

Understand the value of AI-related assets to the organization, including models, data (including user feedback), prompts, software, documentation, logs, and assessments. Clearly identify the locations of these assets, assess and accept risks, and implement appropriate processes and controls to manage data accessible by AI systems.
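
One lightweight way to make such an inventory concrete is a structured record per asset. The fields below are illustrative assumptions, not something the guidelines prescribe; a real registry would likely live in a dedicated asset-management system.

    from dataclasses import dataclass, field

    @dataclass
    class AIAsset:
        # Illustrative inventory record for an AI-related asset.
        name: str                # e.g. "fraud-model-v3" or "feedback-logs"
        kind: str                # model | dataset | prompt | log | evaluation
        location: str            # storage path or registry URL
        owner: str               # accountable team or individual
        sensitivity: str         # public | internal | confidential
        controls: list = field(default_factory=list)  # protections applied

    inventory = [
        AIAsset("fraud-model-v3", "model", "s3://models/fraud/v3",
                "ml-platform-team", "confidential",
                ["encryption-at-rest", "access-logging"]),
    ]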

Document data, models, and prompts

Document the creation, operation, and lifecycle management of all models, datasets, and meta- or system prompts. Comprehensive documentation enhances transparency and accountability.

Manage technical debt

Like any software system, an AI system requires identifying, tracking, and managing 'technical debt': engineering decisions that fall short of best practice in order to achieve short-term results, at the expense of longer-term benefits, leaving the issue for future resolution. Doing so can be more challenging in an AI context than for standard software, and technical debt levels are likely to be high due to rapid development cycles and a lack of well-established protocols and interfaces.

Secure Deployment

This section provides guidelines for the deployment stage of the AI system lifecycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.

Secure infrastructure

Follow good infrastructure security principles for each stage of the system lifecycle. During development and deployment, apply appropriate access controls to APIs, models, and data, including their training and processing flows. This includes isolating environments that store sensitive code or data, helping mitigate the harm of common cybersecurity attacks aiming to steal models or impair their performance.
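
As a minimal sketch of API access control, the check below compares a presented key against one injected via the environment, using a constant-time comparison. The environment variable name and the inference stub are assumptions for illustration only.

    import hmac
    import os

    # Assumed deployment detail: the key arrives via the environment,
    # never hard-coded next to the model-serving logic.
    EXPECTED_API_KEY = os.environ.get("INFERENCE_API_KEY", "")

    def run_model(prompt: str) -> str:
        return "stub response"    # stand-in for the real inference call

    def handle_request(api_key: str, prompt: str) -> str:
        # hmac.compare_digest avoids leaking the key through timing.
        if not hmac.compare_digest(api_key, EXPECTED_API_KEY):
            return "403 Forbidden"    # reject before touching the model
        return run_model(prompt)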

Protect model continuously

Attackers may reconstruct a model's functionality or training data through direct access (obtaining the model weights) or indirect access (querying the model through an application or service). Attackers may also tamper with models, data, or prompts during or after training to render outputs untrustworthy. To protect models and data from direct and indirect access alike, implement standard cybersecurity best practices along with controls on the query interface to detect and prevent attempts to access, modify, or exfiltrate confidential information.
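
One such query-interface control is per-client rate limiting, which raises the cost of the high-volume querying typical of model-extraction attempts. This is a minimal in-memory sliding-window sketch with illustrative limits, not a production limiter.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_QUERIES_PER_WINDOW = 100       # illustrative per-client budget

    _recent = defaultdict(deque)       # client_id -> recent query timestamps

    def allow_query(client_id: str) -> bool:
        now = time.monotonic()
        window = _recent[client_id]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()           # drop timestamps outside the window
        if len(window) >= MAX_QUERIES_PER_WINDOW:
            return False               # throttle and flag for review
        window.append(now)
        return True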

Develop incident management procedures

Since absolute security is unattainable, security incidents are inevitable. Develop incident response, remediation, and post-event improvement plans. These plans should be regularly assessed with the development of the system and broader research. Critical digital assets should be backed up offline, and emergency response training exercises should be conducted regularly.

Release AI responsibly

Before releasing models, applications, or systems, conduct appropriate and effective security assessments. Clearly communicate known defects or potential failure modes to users.

Make it easy for users to do the right things

Evaluate each new setting or configuration option against the business benefit it brings and the security risk it introduces. Ideally, the safest setting should be the only option. Where configuration is necessary, make the default broadly secure against common threats; in other words, secure by default.
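
In code, "secure by default" can be as simple as choosing defaults so that doing nothing leaves the service in its safest state. The option names below are hypothetical; the point is the direction of each default.

    from dataclasses import dataclass

    @dataclass
    class ServiceConfig:
        # Hypothetical options; each default is the secure choice.
        require_auth: bool = True          # anonymous access is opt-in
        rate_limiting: bool = True
        log_inputs: bool = True            # subject to privacy requirements
        allow_debug_endpoints: bool = False
        tls_only: bool = True

    config = ServiceConfig()               # no configuration needed to be safe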

Secure Operation and Maintenance

The secure operation and maintenance guidelines cover the operation and maintenance phases of the AI system lifecycle, offering guidance for after a system has been deployed, including logging and monitoring, update management, and information sharing.

Monitor system’s behavior

Monitor the output and performance of models and systems so that sudden and gradual changes in behavior affecting security can be observed. This helps identify potential intrusions and compromise, determine their causes, and recognize natural data drift.
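
A simple form of such monitoring is tracking a rolling statistic of model outputs against a validation-time baseline. The baseline, tolerance, and window below are hypothetical, and a stand-in print replaces a real alerting hook.

    from collections import deque
    from statistics import mean

    BASELINE_MEAN = 0.87    # hypothetical mean confidence from validation
    TOLERANCE = 0.10        # alert when the rolling mean drifts this far
    WINDOW = 500

    recent = deque(maxlen=WINDOW)

    def alert(message: str) -> None:
        print("ALERT:", message)    # stand-in for a real paging hook

    def record_confidence(score: float) -> None:
        # A sudden or gradual shift may indicate an attack, an upstream
        # change, or natural data drift; either way it warrants review.
        recent.append(score)
        if len(recent) == WINDOW and abs(mean(recent) - BASELINE_MEAN) > TOLERANCE:
            alert("model confidence drifted from baseline")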

Monitor system’s inputs

Depending on privacy and data protection requirements, monitor and record system inputs (such as inference requests, queries, or prompts) to fulfill compliance obligations and to support auditing, investigation, and remediation in the event of leakage or misuse. For instance, monitoring inputs can reveal out-of-distribution scenarios and adversarial inputs (such as cropped or resized images).
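
A privacy-conscious way to do this is to log request metadata plus a digest of the raw input rather than the input itself; full prompts, where retention is permitted, can go to a separately restricted store. The sketch below uses only the standard library, and the field names are assumptions.

    import hashlib
    import json
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("inference-audit")

    def log_request(client_id: str, prompt: str) -> None:
        # Enough for audit, investigation, and remediation without
        # retaining the raw prompt in the general-purpose log.
        audit_log.info(json.dumps({
            "ts": time.time(),
            "client": client_id,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "prompt_chars": len(prompt),
        }))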

Follow a secure by design approach to updates

Generally, each product should include automatic updates and use secure modular update mechanisms for distribution. The update process (including testing and evaluation mechanisms) should reflect changes in system behavior caused by alterations to data, models, or prompts. Users should be supported in evaluating and responding to model changes (e.g., providing preview access and different versions of the API).
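
One way to make the testing-and-evaluation step concrete is a regression gate: a candidate model is promoted only if it stays within tolerance of the current one on a fixed evaluation set. Everything below (the toy eval set, the tolerance, the stand-in models) is hypothetical.

    # Hypothetical regression gate run before each model update is shipped.
    EVAL_SET = [("2+2", "4"), ("capital of France", "Paris")]  # toy examples
    TOLERANCE = 0.02

    def accuracy(model, eval_set) -> float:
        return sum(model(q) == a for q, a in eval_set) / len(eval_set)

    def promote_if_safe(current, candidate) -> bool:
        # Surface behavioral change from new data, models, or prompts
        # instead of silently shipping it.
        return accuracy(candidate, EVAL_SET) >= accuracy(current, EVAL_SET) - TOLERANCE

    # Stand-in models for demonstration:
    answers = {"2+2": "4", "capital of France": "Paris"}
    current = answers.get
    candidate = answers.get
    print(promote_if_safe(current, candidate))    # True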

Collect and share lessons learned

Engage in information-sharing communities, collaborate in global industry, academic, and government ecosystems, and appropriately share best practices. Maintain open communication channels to receive feedback on system security internally and externally, allowing security researchers to study and report vulnerabilities.

This interpretation provides an overview of the Guidelines for Secure AI System Development for general readers. AI experts or AI system developers may refer to the original document for more detailed and comprehensive information. Additionally, the references listed in the guidelines’ endnotes can serve as supplementary reading material for interested readers to explore further.