Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense
February 26, 2026
On February 20, 2026, AI company Anthropic released a new code security tool called Claude Code Security. The release landed during a period of heightened sensitivity in global capital markets to AI technology disrupting the traditional software industry, quickly triggering sharp market volatility and driving down the stock prices of major […]
Analysis of the Attack Surface in the Agent Skills Architecture: Case Studies and Ecosystem Research
February 3, 2026
Background As LLMs and intelligent agents expand from dialogue to task execution, the encapsulation, reuse, and orchestration of LLM capabilities have become key issues. As a capability abstraction mechanism, Skills encapsulate reasoning logic, tool calls, and execution processes into reusable skill units, enabling a model to operate in a stable, consistent, and manageable way when performing complex […]
NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance
January 29, 2026
SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives: foundational security, data security, […]
The Escalating AI Security Threat in the Cloud: NSFOCUS Protection Recommendations
January 27, 2026
As AI applications fully embrace the cloud, emerging components and complex supply chains—while offering convenience—have also led to a sharp rise in risks from configuration flaws and vulnerability exploitation, making the AI security landscape in the cloud increasingly severe. In response to this trend, NSFOCUS conducted analysis of 48 typical global data breach incidents in […]
NSFOCUS AI-Scan Gains Recognition from Authoritative Institution
January 22, 2026
SANTA CLARA, Calif., Jan 22, 2026 – Recently, International Data Corporation (IDC) released the report “China Large Language Model (LLM) Security Assessment Platform Vendor Technology Evaluation” (Doc#CHC53839325, October 2025). NSFOCUS was selected for this report based on its proven product performance and LLM security assessment methodology. With a comprehensive capability matrix built across model security, data […]
Securing the AI Revolution: NSFOCUS LLM Security Protection Solution
December 17, 2025
As Artificial Intelligence technology rapidly advances, Large Language Models (LLMs) are being widely adopted across countless domains. With this growth, however, comes a critical challenge: LLM security issues are becoming increasingly prominent and now pose a major constraint on further development. Governments and regulatory bodies are responding with policies and regulations to ensure the safety and compliance […]
Building a Full-Lifecycle Defense System for Large Language Model Security
October 2, 2025
Santa Clara, Calif. Oct 2, 2025 – Recently, NSFOCUS held the AI New Product Launch in Beijing, comprehensively showcasing the company’s latest technological achievements and practical experience in AI security. With large language model security protection as the core topic, the launch systematically introduced NSFOCUS’s concept and practices in strategy planning, scenario-based protection, technical products, and […]
Dive into NSFOCUS LLM Security Solution
September 12, 2025
Overview The NSFOCUS LLM security solution comprises two products and their accompanying services: the LLM security assessment system (AI-SCAN) and the AI unified threat management system (AI-UTM), which together form a security assessment and protection system covering the entire LLM life cycle. In the model training and fine-tuning stage, the large language model security assessment system (AI-SCAN) plays a […]
Prompt Injection: An Analysis of Recent LLM Security Incidents
August 26, 2025
Overview With the widespread application of LLM technology, data leakage incidents caused by prompt injection are on the rise. Emerging attack methods, such as inducing AI models to execute malicious instructions through crafted prompts, or even rendering sensitive information into images to evade traditional detection, pose serious challenges to data security. At the same […]
The Invisible Battlefield Behind LLM Security Crisis
March 10, 2025
Overview In recent years, with the wide adoption of open-source LLMs and tooling such as DeepSeek and Ollama, global enterprises have been accelerating private LLM deployment. This wave not only improves enterprise efficiency but also increases the risk of data leakage. According to NSFOCUS Xingyun Lab, from January to February 2025 alone, five […]