In 2023, countries worldwide continued to strengthen their cybersecurity capabilities and systems in response to their national needs, using regulatory means to enhance their cybersecurity management. Based on continuous tracking and research, NSFOCUS summarized the development of global cybersecurity regulations and policies in 2023, hoping to provide valuable insights and guidance for stakeholders, policymakers, and cybersecurity professionals navigating this dynamic landscape.
The series includes four aspects: “Network Security,” “Data Security,” “Privacy Protection,” and “Tech Development and Governance,” with content organized in chronological order.
This article provides an overview of regulations and policies related to cybersecurity technology, covering various aspects, including generative artificial intelligence, facial recognition technology, zero trust, and post-quantum cryptography.
The United States Cybersecurity and Infrastructure Security Agency (CISA) released the “Zero Trust Maturity Model” (2nd edition), enhancing the implementation specifications of Zero Trust in the field of homeland security.
The U.S. government has placed significant strategic emphasis on Zero Trust in recent years. Federal departments have successively issued a series of Zero Trust policies and guidance documents, such as OMB Memorandum M-22-09, Moving the U.S. Government Toward Zero Trust Cybersecurity Principles; the National Security Agency's Embracing a Zero Trust Security Model; the DoD Zero Trust Strategy; NIST's Zero Trust Architecture; and CISA's Applying Zero Trust Principles to Enterprise Mobility.
The updated Zero Trust Maturity Model further refines how Zero Trust is to be implemented across U.S. homeland security. Building on the first edition (Zero Trust Maturity Model 1.0), it updates the Zero Trust maturity stages, key pillar functions, assessment indicators, and descriptions.
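The model's structure can be illustrated with a minimal sketch: five pillars (Identity, Devices, Networks, Applications and Workloads, Data) assessed against four maturity stages, where version 2 added the "Initial" stage between "Traditional" and "Advanced." The aggregation rule below (overall maturity = weakest pillar) is an illustrative assumption for this sketch, not something the model itself prescribes.

```python
# Sketch of the ZTMM v2 structure: five pillars, four maturity stages.
# Assumption: overall maturity is taken as the lowest pillar stage.

STAGES = ["Traditional", "Initial", "Advanced", "Optimal"]
PILLARS = ["Identity", "Devices", "Networks",
           "Applications and Workloads", "Data"]

def overall_maturity(assessment: dict) -> str:
    """Return the lowest stage reached across all five pillars."""
    idx = min(STAGES.index(assessment[p]) for p in PILLARS)
    return STAGES[idx]

example = {
    "Identity": "Advanced",
    "Devices": "Initial",
    "Networks": "Advanced",
    "Applications and Workloads": "Initial",
    "Data": "Traditional",
}
print(overall_maturity(example))  # -> Traditional
```

A weakest-link aggregation matches the spirit of Zero Trust (an attacker targets the least-protected pillar), but organizations may equally track per-pillar progress without a single summary score.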
With the explosive popularity of ChatGPT on the internet, generative AI technology and its security issues have increasingly become a global focus. Countries around the world have enacted or updated regulations and policies to strengthen the development and security management of artificial intelligence. For example, the United States released the AI Accountability Policy Request for Comment in April 2023, and the European Union introduced the Cybersecurity of AI and Standardisation in March 2023.
In the field of artificial intelligence technology, China has also introduced policy documents such as the Guiding Opinions on Accelerating Scenario Innovation to Promote High-Quality Economic Development with High-Level Applications of Artificial Intelligence, the Development Plan for the Next-Generation Artificial Intelligence, and the Guidelines for the Construction of the National Standard System for the Next-Generation Artificial Intelligence. These policies strengthen macro deployments in terms of strategy, industry, standards, and other aspects to promote the development of artificial intelligence.
Compared with previous regulations on artificial intelligence, the newly released Measures have three notable characteristics.
- It specifies the object of regulation by focusing on “generative artificial intelligence products” and proposes specific supervisory measures.
- It highlights the theme of promoting development, emphasizing the obligations of service providers and making principled provisions on legal responsibilities.
- It introduces a new regulatory model of “admission + obligation + responsibility,” with “security assessment” and “algorithm filing” as basic requirements for admission. Regarding obligations, it outlines three main obligations: technical, service, and customer management. Regarding responsibility, it mainly clarifies the basis of legal responsibilities.
Since taking office, the Biden administration has placed strong emphasis on building the international standards system, especially for emerging technologies. The administration aims to use this to enhance its influence in related fields and maintain the United States' leadership in global technological innovation.
The Strategy raises three points worth considering.
Firstly, standards are not only a means of regulating and promoting technological development but also a crucial battleground for technological competition. International influence over technical standards has become a core indicator of a country's competitiveness in technological innovation.
Secondly, the Strategy is not limited to technology itself but covers many key elements related to technology, such as investment, cooperation, and talent. As the Strategy emphasizes, these elements often play a more decisive role in the development of standards.
Thirdly, as a global technology leader, the United States offers a technology standards strategy of significant reference value for other countries. The technical framework and key technological directions highlighted in the Strategy, including semiconductors, microelectronics, and artificial intelligence, are already guiding, or will guide, the direction of global technological innovation and transformation.
Looking at the timeline, the 2023 edition of the “National Artificial Intelligence Research and Development Strategic Plan” is an update to the 2019 edition. It reiterates the previous eight strategic goals, adjusts and improves specific priority matters for each strategy, and adds a ninth strategy emphasizing international cooperation. The release of this updated version is not only in line with the regular practice of evaluating and adjusting the U.S. artificial intelligence strategy every three years but also reflects the latest requirements and development focus of the U.S. government regarding artificial intelligence technology.
Comparing policies across sectors, various U.S. government departments have recently issued a series of AI-related policies and actions, each with a different focus. For example, in standards development, the U.S. National Institute of Standards and Technology (NIST) released the AI Risk Management Framework. On AI tool reviews, the U.S. National Telecommunications and Information Administration issued the AI Accountability Policy Request for Comment. On institutional setup, the U.S. Department of Homeland Security announced the establishment of a DHS Artificial Intelligence Task Force, and the U.S. National Science Foundation allocated $140 million to establish seven new artificial intelligence research institutes. On AI in education, the U.S. Department of Education released a report, Artificial Intelligence (AI) and the Future of Teaching and Learning: Insights and Recommendations, summarizing the opportunities and risks of AI in teaching, learning, research, and assessment based on public input.
The memorandum serves as a means for the United States to maintain policy consistency and leverage the dual strategic value of software supply chain policies. On one hand, the internal value of software supply chain policy lies in safeguarding the security and continuity of its own software supply chain. On the other hand, the external value of software supply chain policy serves as a strategic tool or leverage to maintain U.S. dominance in related technology industries, often accompanied by common management measures such as export controls.
The memorandum also reflects the current practical implementation of software supply chain security regulation by U.S. federal agencies. Looking at the timeline of Executive Order 14028, the implementation progress of some crucial tasks has lagged. The memorandum also extends the deadline for federal agencies to obtain certification documents from software manufacturers, providing some flexibility for the practical implementation of the rules.
Given the United States’ leading position in various information technology fields, its practices related to software supply chain security management are likely to have a demonstrative effect within a certain scope. To some extent, these practices may encourage other countries to enhance their own software supply chain management mechanisms and standards while strengthening their capabilities in software supply chain development.
Seven Chinese government departments, including the Cyberspace Administration of China, jointly issued the “Interim Measures for the Management of Generative Artificial Intelligence Services,” strengthening the regulation of generative artificial intelligence.
Currently, generative AI primarily faces three types of security risks.
- Data: Potential risks include data quality, data protection mechanisms, and data authenticity.
- Algorithms: Potential risks involve cognitive security issues.
- Computing power: Potential risks include cost issues and ecological problems.
The Interim Measures are among the few dedicated regulatory regimes for generative AI that have formally taken effect anywhere in the world, providing an important blueprint for the regulation of generative AI systems and for legislative practice more broadly.
From the perspective of the cybersecurity industry, the implementation of the Interim Measures will have positive effects.
Firstly, the clarification of security requirements will bring incremental market opportunities for the cybersecurity-related industry. This includes compliance and security assessments of training data, protection of data and personal information during service provision, security technical support during supervision and inspection, and related security and trust support.
Secondly, the strengthening of development elements will empower technological innovation in the cybersecurity industry. Elements such as a “Public Training Data Platform” and “AI Infrastructure” address issues related to computing power and data, significantly alleviating enterprises’ capability gaps in the development of generative AI. The construction of the generative artificial intelligence ecosystem will also inject new vitality into enterprises’ generative AI development.
Adapting to the evolving landscape of cybersecurity risks, NIST’s “Cybersecurity Framework” undergoes dynamic updates. In February 2014, NIST released Version 1.0 of the Framework for Improving Critical Infrastructure Cybersecurity (CSF). In April 2018, NIST followed up with the release of “Framework 1.1.” In April 2023, NIST introduced the core discussion draft of “Framework 2.0.”
The newly released "Framework 2.0" primarily updates "Framework 1.1" in the following three aspects:
- Expanding the scope: “Framework 2.0” is renamed “Cybersecurity Framework,” signaling an expansion beyond critical infrastructure. This reflects a broadening of the framework’s applicability.
- Strengthening cybersecurity governance: In “Framework 2.0,” the category of governance is adjusted to be on par with the other five core functions, emphasizing the importance the U.S. government places on cybersecurity governance.
- Emphasizing supply chain risk management: “Framework 2.0” provides more content on assessing and managing security risks in the supply chain, reflecting the increasing importance the U.S. government places on supply chain security and its commitment to ensuring the security of products and services obtained from suppliers and partners.
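The elevation of governance described above can be made concrete with a small sketch: CSF 2.0 organizes outcomes under six functions, with Govern (GV) sitting alongside the original five. The tier-gap helper below is an illustrative assumption; real CSF organizational profiles are far richer than a single tier per function.

```python
# Sketch of the six CSF 2.0 functions, with Govern (GV) elevated
# alongside the original five. The gap-report helper and the one-tier-
# per-function profile are simplifying assumptions for illustration.

FUNCTIONS = {
    "GV": "Govern",
    "ID": "Identify",
    "PR": "Protect",
    "DE": "Detect",
    "RS": "Respond",
    "RC": "Recover",
}

TIERS = ["Partial", "Risk Informed", "Repeatable", "Adaptive"]

def gap_report(current: dict, target: dict) -> list:
    """List function codes whose current tier falls below the target."""
    return [code for code in FUNCTIONS
            if TIERS.index(current[code]) < TIERS.index(target[code])]

current = {"GV": "Partial", "ID": "Repeatable", "PR": "Repeatable",
           "DE": "Risk Informed", "RS": "Repeatable", "RC": "Partial"}
target = {code: "Repeatable" for code in FUNCTIONS}
print(gap_report(current, target))  # -> ['GV', 'DE', 'RC']
```

Comparing a current profile against a target profile in this way mirrors how the framework is commonly used for prioritizing improvement work, with governance now a first-class function in that comparison.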
Quantum computing is one of the forefront areas of current international competition in science and technology, and countries are positioning quantum technology development at the national strategic level. Global attention is focused on quantum computing, quantum encryption, and post-quantum cryptography. NIST began work on standardizing post-quantum cryptographic algorithms in 2017. The draft standards published this time cover three of the four post-quantum cryptographic algorithms selected in July 2022; a draft standard for the fourth algorithm is expected about a year later.
Post-quantum cryptography raises cybersecurity issues that cybersecurity companies should watch closely: tracking and observing quantum attack techniques, enhancing monitoring and early-warning capabilities, formulating emergency response plans, and building comprehensive protection systems for critical systems to strengthen resilience during the transition between the two cryptographic regimes.
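A practical first step in that transition is a cryptographic inventory that flags quantum-vulnerable algorithms and maps them to candidate replacements from the NIST drafts (FIPS 203 ML-KEM from CRYSTALS-Kyber, FIPS 204 ML-DSA from CRYSTALS-Dilithium, FIPS 205 SLH-DSA from SPHINCS+). The inventory format and the specific mapping below are illustrative assumptions for this sketch.

```python
# Hedged sketch of a cryptographic-inventory pass for post-quantum
# migration planning. The inventory format and mapping are assumptions;
# replacement names come from the 2023 NIST draft standards.

QUANTUM_VULNERABLE = {
    # classical algorithm -> candidate post-quantum replacement
    "RSA-2048": "ML-DSA (FIPS 204 draft)",
    "ECDSA-P256": "ML-DSA (FIPS 204 draft)",
    "ECDH-P256": "ML-KEM (FIPS 203 draft)",
}

def flag_vulnerable(inventory: list) -> list:
    """Return (system, algorithm, suggested replacement) triples."""
    return [(system, alg, QUANTUM_VULNERABLE[alg])
            for system, alg in inventory
            if alg in QUANTUM_VULNERABLE]

inventory = [
    ("vpn-gateway", "ECDH-P256"),
    ("code-signing", "RSA-2048"),
    ("tls-frontend", "AES-256-GCM"),  # symmetric: not broken by Shor's algorithm
]

for system, alg, repl in flag_vulnerable(inventory):
    print(f"{system}: replace {alg} with {repl}")
```

Public-key algorithms based on factoring or discrete logarithms are the ones threatened by Shor's algorithm, which is why the symmetric cipher in the example is not flagged; Grover's algorithm only motivates larger symmetric key sizes.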
The Cyberspace Administration of China, along with seven other departments, released the Provisional Measures for the Security Management of Facial Recognition Technology Applications (Draft for Comments), marking the first special policy draft for the security supervision of facial recognition technology applications.
Many countries have introduced regulatory policies addressing facial recognition technology in recent years. For instance, the U.S. Federal Trade Commission issued a policy statement on the misuse of biometric information and resulting harms to consumers, and the European Data Protection Board released guidelines on the use of facial recognition technology in law enforcement.
The Provisional Measures represent China's first draft of a dedicated policy for the security of facial recognition technology applications. The proposed mechanisms, including prior impact assessments, record-filing management, and device security testing, play a crucial role in establishing a sound security management system for facial recognition in China. However, the current Provisional Measures focus mainly on the network level; specific regulatory mechanisms for data-related issues, such as storage security and monitoring of personal information usage, have not yet been established. This may be a direction for further improvement.
Currently, AI security regulation is a focal point in the policy and regulatory construction of many countries. The U.S. federal government prioritizes AI in its National Cybersecurity Research and Development Strategic Plan (2019-2023). It has issued policy documents like the National AI Research and Development Strategic Plan (White House) and the AI Risk Management Framework (NIST), outlining regulations for AI interpretability, risk management, responsibility mechanisms, and dedicated agencies. The European Union has proposed the “AI Act” draft, focusing on the specific uses and risks of AI systems. China has included the “AI Law Draft” in the State Council’s 2023 legislative work plan and officially released the “Provisional Measures for the Management of Generative AI Services.”
The hearing on legislating for artificial intelligence proposed specific regulatory systems and mechanisms, such as establishing an independent regulatory body to audit model developers, pushing AI development companies to take responsibility for model outputs, and using legal instruments such as export controls to restrict the transfer of AI systems. These proposals have demonstrative significance for enriching current AI legislative practice in various countries and for regulating and guiding the healthy development of related industries.
The "AI Roadmap" outlines specific measures CISA plans to implement at the intersection of AI and cybersecurity. It aims to implement the Biden administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The recent flurry of AI regulatory policies from the U.S. government, such as the National AI Research and Development Strategic Plan, the AI Risk Management Framework, and the AI Research, Innovation, and Accountability Act of 2023, reflects the high importance the U.S. government places on AI security.
Previously, the U.S. government's focus on AI centered mainly on interpretability and traceability. The Roadmap systematizes these focus areas and extends them to additional domains, including international cooperation, infrastructure protection, and talent cultivation.
The Ministry of Industry and Information Technology of China released the Notice on Printing and Distributing the Pilot Work Rules (Interim) and the Construction Guide for the Pilot Area of ‘5G + Industrial Internet’ Fusion Application.
Industrial Internet is the convergence point of the two major national strategies: “Building a Manufacturing Power” and “Building a Network Power.” The Chinese government has previously established a special working group for industrial Internet under the “Leading Group for Building a Manufacturing Power,” coordinating the promotion of the country’s industrial Internet development. The construction of the “5G + Industrial Internet” fusion application pilot area is one of the important goals of the annual work plan of the Industrial Internet special working group.
The Pilot Work Rules (Interim) consider “major network security incidents and major safety production accidents” as veto indicators for evaluating the pilot work of the leading area. This fully reflects the significant importance of network security in the overall work of the “5G + Industrial Internet” fusion application.
The AI Act is the world's first comprehensive legislation on artificial intelligence. It continues the European legislative tradition of emphasizing privacy protection and strict penalties for violations. After reaching a provisional agreement, the relevant institutions will continue to refine and clarify the text at the technical level and submit the final text to member state representatives for approval. The act's restrictive provisions on AI systems, regulatory requirements for high-risk models, and innovation-encouraging measures such as the "regulatory sandbox" may provide inspiration and reference for AI legislation in other countries.