As the OpenClaw ecosystem continues to surge in popularity, more customers are deploying these AI agents at scale. However, this growth has brought significant security challenges to the forefront, including more than 33 documented CVE vulnerabilities, 288+ GHSA security advisories, the rise of malicious Skills, and frequent...
NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance
SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple...
NSFOCUS AI-Scan Gains Recognition from Authoritative Institution
SANTA CLARA, Calif., Jan 22, 2026 – Recently, International Data Corporation (IDC) released the report "China Large Language Model (LLM) Security Assessment Platform Vendor Technology Evaluation" (Doc#CHC53839325, October 2025). NSFOCUS was selected for this report based on its proven product performance and LLM security assessment methodology. With a comprehensive capability matrix...
Building a Full-Lifecycle Defense System for Large Language Model Security
SANTA CLARA, Calif., Oct 2, 2025 – Recently, NSFOCUS held the AI New Product Launch in Beijing, comprehensively showcasing the company's latest technological achievements and practical experience in AI security. With large language model security protection as the core topic, the launch systematically introduced NSFOCUS's concept and practices in strategy planning,...
Dive into NSFOCUS LLM Security Solution
Overview: NSFOCUS's LLM security solution consists of two products and services: the LLM security assessment system (AI-Scan) and the AI unified threat management (AI-UTM), together forming a security assessment and protection system covering the entire LLM life cycle. In the model training and fine-tuning stage, the large language model security...
NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment
Large language model (LLM) adversarial attacks are techniques that deceive LLMs with carefully designed input samples (adversarial samples) into producing incorrect predictions or behaviors. To address this, AI-Scan provides an LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an...

