AI security

Building a Full-Lifecycle Defense System for Large Language Model Security

October 2, 2025

Santa Clara, Calif. Oct 2, 2025 – Recently, NSFOCUS held the AI New Product Launch in Beijing, comprehensively showcasing the company’s latest technological achievements and practical experience in AI security. With large language model security protection as the core topic, the launch systematically introduced NSFOCUS’s concept and practices in strategy planning, scenario-based protection, technical products, and […]

Dive into NSFOCUS LLM Security Solution

Imagem que ilustra um vazamento de dados.

September 12, 2025

Overview NSFOCUS LLM security solution consists of two products and services: the LLM security assessment system (AI-SCAN) and the AI unified threat management (AI-UTM), forming a security assessment and protection system covering the entire life cycle of LLM. In the model training and fine-tuning stage, the large language model security assessment system (AI-SCAN) plays a […]

Prompt Injection: An Analysis of Recent LLM Security Incidents

Imagem que ilustra um vazamento de dados.

August 26, 2025

Overview With the widespread application of LLM technology, data leakage incidents caused by prompt word injections are increasing. Many emerging attack methods, such as inducing AI models to execute malicious instructions through prompt words, and even rendering sensitive information into pictures to evade traditional detection, are posing serious challenges to data security. At the same […]

NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment

Uma imagem que ilustra um cadeado que significa proteção cibernética.

July 16, 2025

Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to produce incorrect predictions or behaviors. In this regard, AI-Scan provides LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an adversarial defense capability assessment report. […]

LLMs Are Posing a Threat to Content Security

Imagem que ilustra funcionários usando inteligência artificial na empresa.

March 4, 2025

With the wide application of large language models (LLM) in various fields, their potential risks and threats have gradually become prominent. “Content security” caused by inaccurate or misleading information is becoming a security concern that cannot be ignored. Unfairness and bias, adversarial attacks, malicious code generation, and exploitation of security vulnerabilities continue to raise risk […]

Build Your AI-Powered Penetration Testing Scheme with DeepSeek + Agent: An NSFOCUS Practice

Uma imagem que ilustra dedos digitando em um teclado.

February 20, 2025

Dilemma of Traditional Automated Penetration Testing Penetration testing has always been the core means of offensive and defensive confrontation for cybersecurity. However, traditional automatic penetration tools face three major bottlenecks: lack of in-depth understanding of business logic, insufficient ability to detect logical vulnerabilities, and weak ability to link vulnerabilities. Although the passive scanning engine can […]

Insights from the DeepSeek Malicious Software Package Incident: Why Software Supply Chain Security Matters in Global AI Technology Competition

February 11, 2025

Background With the widespread application of AI technology, software supply chains are facing more complex and diverse security threats. Since January 2025, DeepSeek, as an emerging force in China’s AI industry, has suffered from series of cyberattacks. According to the analysis by NSFOCUS Security Lab, most attacks are from IP addresses in the United States. […]

Search

Subscribe to the NSFOCUS Blog