AI security

Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense

ilustração de como funciona a computação quântica.

February 26, 2026

On February 20, 2026, AI company Anthropic released a new code security tool called Claude Code Security. This release coincided with the highly sensitive period of global capital markets to AI technology subverting the traditional software industry, which quickly triggered violent fluctuations in the capital market and caused the fall of stock prices of major […]

Analysis of the Attack Surface in the Agent Skills Architecture: Case Studies and Ecosystem Research

February 3, 2026

Background As LLMs and intelligent agents expand from dialogue to task execution, the encapsulation, reuse and orchestration of LLM capabilities have become key issues. As a capability abstraction mechanism, Skills encapsulates reasoning logic, tool calls and execution processes into reusable skill units, enabling the model to achieve stable, consistent and manageable operations when performing complex […]

NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance

ilustração de como funciona a computação quântica.

January 29, 2026

SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives: foundational security, data security, […]

The Escalating AI Security Threat in the Cloud: NSFOCUS Protection Recommendations

Duas mãos utilizando um notebook.

January 27, 2026

As AI applications fully embrace the cloud, emerging components and complex supply chains—while offering convenience—have also led to a sharp rise in risks from configuration flaws and vulnerability exploitation, making the AI security landscape in the cloud increasingly severe. In response to this trend, NSFOCUS conducted analysis of 48 typical global data breach incidents in […]

NSFOCUS AI-Scan Gains Recognition from Authoritative Institution

January 22, 2026

SANTA CLARA, Calif., Jan 22, 2026 – Recently, International Data Corporation (IDC) released the report “China Large Language Model (LLM) Security Assessment Platform Vendor Technology Evaluation” (Doc#CHC53839325, October 2025). NSFOCUS was selected for this report based on its proven product performance and LLM security assessment methodology. With a comprehensive capability matrix built across model security, data […]

Building a Full-Lifecycle Defense System for Large Language Model Security

October 2, 2025

Santa Clara, Calif. Oct 2, 2025 – Recently, NSFOCUS held the AI New Product Launch in Beijing, comprehensively showcasing the company’s latest technological achievements and practical experience in AI security. With large language model security protection as the core topic, the launch systematically introduced NSFOCUS’s concept and practices in strategy planning, scenario-based protection, technical products, and […]

Dive into NSFOCUS LLM Security Solution

Imagem que ilustra um vazamento de dados.

September 12, 2025

Overview NSFOCUS LLM security solution consists of two products and services: the LLM security assessment system (AI-SCAN) and the AI unified threat management (AI-UTM), forming a security assessment and protection system covering the entire life cycle of LLM. In the model training and fine-tuning stage, the large language model security assessment system (AI-SCAN) plays a […]

Prompt Injection: An Analysis of Recent LLM Security Incidents

Imagem que ilustra um vazamento de dados.

August 26, 2025

Overview With the widespread application of LLM technology, data leakage incidents caused by prompt word injections are increasing. Many emerging attack methods, such as inducing AI models to execute malicious instructions through prompt words, and even rendering sensitive information into pictures to evade traditional detection, are posing serious challenges to data security. At the same […]

NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment

Uma imagem que ilustra um cadeado que significa proteção cibernética.

July 16, 2025

Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to produce incorrect predictions or behaviors. In this regard, AI-Scan provides LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an adversarial defense capability assessment report. […]

LLMs Are Posing a Threat to Content Security

Imagem que ilustra funcionários usando inteligência artificial na empresa.

March 4, 2025

With the wide application of large language models (LLM) in various fields, their potential risks and threats have gradually become prominent. “Content security” caused by inaccurate or misleading information is becoming a security concern that cannot be ignored. Unfairness and bias, adversarial attacks, malicious code generation, and exploitation of security vulnerabilities continue to raise risk […]

Search

Subscribe to the NSFOCUS Blog