LLM

RSAC 2026 Innovation Sandbox | ZeroPath: From Alarm Accumulation to Executable Fixes

março 22, 2026

Company Profile ZeroPath is an AI-native application security startup founded in 2024, and its core products also use the eponymous brand ZeroPath. The company focuses on using AI to automatically discover, verify and fix code vulnerabilities, trying to break through the limitations of traditional SAST, SCA, Secrets scanning and IaC scanning that are fighting each […]

OpenClaw Security Issues: Add a “Security Guardrail” to Your AI Application

Uma imagem que ilustra dedos digitando em um teclado.

março 11, 2026

In 2026, AI intelligent agent technology will usher in a full-scale explosion. As a representative project, OpenClaw (formerly known as Clawdbot and Moltbot) is highly favored for its powerful capabilities-it can integrate multi-channel communication capabilities with large language models to build customized AI assistants with persistent memory and active execution capabilities, supporting local private deployment. […]

Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense

ilustração de como funciona a computação quântica.

fevereiro 26, 2026

On February 20, 2026, AI company Anthropic released a new code security tool called Claude Code Security. This release coincided with the highly sensitive period of global capital markets to AI technology subverting the traditional software industry, which quickly triggered violent fluctuations in the capital market and caused the fall of stock prices of major […]

Analysis of the Attack Surface in the Agent Skills Architecture: Case Studies and Ecosystem Research

fevereiro 3, 2026

Background As LLMs and intelligent agents expand from dialogue to task execution, the encapsulation, reuse and orchestration of LLM capabilities have become key issues. As a capability abstraction mechanism, Skills encapsulates reasoning logic, tool calls and execution processes into reusable skill units, enabling the model to achieve stable, consistent and manageable operations when performing complex […]

NSFOCUS AI-Scan Gains Recognition from Authoritative Institution

janeiro 22, 2026

SANTA CLARA, Calif., Jan 22, 2026 – Recently, International Data Corporation (IDC) released the report “China Large Language Model (LLM) Security Assessment Platform Vendor Technology Evaluation” (Doc#CHC53839325, October 2025). NSFOCUS was selected for this report based on its proven product performance and LLM security assessment methodology. With a comprehensive capability matrix built across model security, data […]

Securing the AI Revolution: NSFOCUS LLM Security Protection Solution

Uma imagem que ilustra a mão de um robo e um escudo que passa o sentimento de segurança.

dezembro 17, 2025

As Artificial Intelligence technology rapidly advances, Large Language Models (LLMs) are being widely adopted across countless domains. However, with this growth comes a critical challenge: LLM security issues are becoming increasingly prominent, posing a major constraint on further development. Governments and regulatory bodies are responding with policies and regulations to ensure the safety and compliance […]

Building a Full-Lifecycle Defense System for Large Language Model Security

outubro 2, 2025

Santa Clara, Calif. Oct 2, 2025 – Recently, NSFOCUS held the AI New Product Launch in Beijing, comprehensively showcasing the company’s latest technological achievements and practical experience in AI security. With large language model security protection as the core topic, the launch systematically introduced NSFOCUS’s concept and practices in strategy planning, scenario-based protection, technical products, and […]

Dive into NSFOCUS LLM Security Solution

Imagem que ilustra um vazamento de dados.

setembro 12, 2025

Overview NSFOCUS LLM security solution consists of two products and services: the LLM security assessment system (AI-SCAN) and the AI unified threat management (AI-UTM), forming a security assessment and protection system covering the entire life cycle of LLM. In the model training and fine-tuning stage, the large language model security assessment system (AI-SCAN) plays a […]

Prompt Injection: An Analysis of Recent LLM Security Incidents

Imagem que ilustra um vazamento de dados.

agosto 26, 2025

Overview With the widespread application of LLM technology, data leakage incidents caused by prompt word injections are increasing. Many emerging attack methods, such as inducing AI models to execute malicious instructions through prompt words, and even rendering sensitive information into pictures to evade traditional detection, are posing serious challenges to data security. At the same […]

NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment

Uma imagem que ilustra um cadeado que significa proteção cibernética.

julho 16, 2025

Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to produce incorrect predictions or behaviors. In this regard, AI-Scan provides LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an adversarial defense capability assessment report. […]

Search

Inscreva-se no Blog da NSFOCUS