AI security

AI Infrastructure LiteLLM Supply Chain Poisoning Alert

March 27, 2026

Overview NSFOCUS Technology CERT recently detected a disclosure in the GitHub community of a credential-stealing program in a new version of LiteLLM. Analysis confirmed that the package had suffered supply chain poisoning on PyPI by the TeamPCP group, which stole the publishing credentials by compromising the security scanning tool Trivy used in […]

NSFOCUS Threat Intelligence: Building an OpenClaw Defense System with Multiple-Layer Protection

March 24, 2026

In 2026, AI agents are in widespread use. With its autonomous decision-making and local execution capabilities, OpenClaw has become a heavily used productivity tool for enterprises and developers. However, several authoritative security agencies have recently issued warnings: OpenClaw faces multi-dimensional security threats, from supply chain poisoning to remote control. When internal employees privately deploy […]

RSAC 2026 Innovation Sandbox | Clearly AI: Automated Software Security Platform Empowered by AI

March 19, 2026

Company Profile Founded in 2024 and headquartered in Seattle, Washington, USA, Clearly AI is a company focused on automating enterprise security and privacy audits. The company was co-founded by Emily Choi-Greene and Joe Choi-Greene, and the core team has deep practical and technical experience: CEO Emily worked at Amazon for 5 years, leading the Alexa AI […]

Insights into Claude Code Security: A New Pattern of Intelligent Attack and Defense


February 26, 2026

On February 20, 2026, AI company Anthropic released a new code security tool called Claude Code Security. The release coincided with a period of heightened sensitivity in global capital markets to AI technology disrupting the traditional software industry, and it quickly triggered sharp market volatility and a fall in the stock prices of major […]

Analysis of the Attack Surface in the Agent Skills Architecture: Case Studies and Ecosystem Research

February 3, 2026

Background As LLMs and intelligent agents expand from dialogue to task execution, the encapsulation, reuse, and orchestration of LLM capabilities have become key issues. As a capability abstraction mechanism, Skills encapsulate reasoning logic, tool calls, and execution processes into reusable skill units, enabling the model to operate in a stable, consistent, and manageable way when performing complex […]

NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance


January 29, 2026

SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives: foundational security, data security, […]

The Escalating AI Security Threat in the Cloud: NSFOCUS Protection Recommendations


January 27, 2026

As AI applications fully embrace the cloud, emerging components and complex supply chains, while offering convenience, have also led to a sharp rise in risks from configuration flaws and vulnerability exploitation, making the AI security landscape in the cloud increasingly severe. In response to this trend, NSFOCUS conducted an analysis of 48 typical global data breach incidents in […]

NSFOCUS AI-Scan Gains Recognition from Authoritative Institution

January 22, 2026

SANTA CLARA, Calif., Jan 22, 2026 – Recently, International Data Corporation (IDC) released the report “China Large Language Model (LLM) Security Assessment Platform Vendor Technology Evaluation” (Doc#CHC53839325, October 2025). NSFOCUS was selected for this report based on its proven product performance and LLM security assessment methodology. With a comprehensive capability matrix built across model security, data […]

Building a Full-Lifecycle Defense System for Large Language Model Security

October 2, 2025

Santa Clara, Calif. Oct 2, 2025 – Recently, NSFOCUS held the AI New Product Launch in Beijing, comprehensively showcasing the company’s latest technological achievements and practical experience in AI security. With large language model security protection as the core topic, the launch systematically introduced NSFOCUS’s concept and practices in strategy planning, scenario-based protection, technical products, and […]

Dive into NSFOCUS LLM Security Solution


September 12, 2025

Overview The NSFOCUS LLM security solution consists of two products and services: the LLM security assessment system (AI-SCAN) and the AI unified threat management system (AI-UTM), which together form a security assessment and protection system covering the entire life cycle of an LLM. In the model training and fine-tuning stage, the large language model security assessment system (AI-SCAN) plays a […]
