NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance

NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance

January 29, 2026 | NSFOCUS

SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives: foundational security, data security, model security, application security, and identity security. It covers the entire lifecycle of LLMs, including the training, deployment, and application stages.

As AI Agents scale rapidly, their security and trustworthiness have become a focal point for the industry. Issues such as intent tampering, call chain poisoning, supply chain vulnerabilities, and compliance pressures are emerging as significant obstacles for enterprises. On January 22, 2026, NSFOCUS unveiled its latest innovations in LLM security in a new product launch. The event analyzed evolving AI application demands and introduced systematic solutions to address urgent security challenges. NSFOCUS also enhanced its AI Agent security capabilities, providing actionable, verifiable guidelines for secure AI deployment across industries.

The New AI LLM Risk Threat Matrix

During the conference, the NSFOCUS research team highlighted a shift in AI security from “content detection” to “intentional adversarial interactions”:

  • In 2024, AI security focused on “content adversarial” challenges, tackling dialogue security and mitigating compliance risks from excessive model output.
  • In 2025, the focus transitioned into the “protocol ecosystem” phase, extending risk exposure from dialogue endpoints to business systems as MCP protocol tools become widespread. The core challenge will be establishing trust within the call chain ecosystem.
  • By 2026, the security emphasis will shift to “intent sovereignty”, preventing attackers from hijacking perceptual information to manipulate deep intentions and commands.

Based on these industry trends, NSFOCUS officially released the newest AI LLM Risk Threat Matrix, which enables enterprises to accurately identify risk priorities, pinpoint core issues, and transition from “blind defense” to “precision governance”.

NSFOCUS added 14 new risks in the AI LLM Risk Threat Matrix, highlighting three major trends:

A surge in AI agent security risks

Growing challenges in multimodal security

Concentrated exposure of risks in MCP (Multi-Agent Communication Protocols)

The new matrix reflects the emergence of new security threats as AI security evolves from single-model systems to multi-agent collaboration and multimodal integration.

Identity and Privilege Security

  • Unauthorized Access to System Resources via MCP: Using MCP tools to achieve unauthorized access to sensitive system resources.
  • Privilege Escalation in Action Module: Failure in Agent Action module privilege management leading to operations exceeding authorized scope.
  • Multi-Agent Identity Spoofing: Forging Agent identities to bypass authentication mechanisms and access system resources.

Application System and Behavioral Security

  • MCP Tool Poisoning Attack: Injecting malicious prompts into MCP tool descriptions to manipulate model behavior.
  • MCP Hidden Instruction Attack: Hiding malicious instructions in tool descriptions via special tags or encoding.
  • MCP Carpet-bombing Scam: Dynamically modifying tool descriptions to implant malicious instructions after client authorization.
  • MCP Instruction Override Attack: Malicious instructions overriding legitimate tool functions to implement persistent backdoors.
  • Environment Injection Attack: Embedding malicious instructions into the external environment to indirectly induce Agents to perform unauthorized operations.
  • Unexpected Code Execution: Agents executing code operations beyond expectations, leading to system intrusion or data tampering.
  • Multi-modal Collaborative Injection Attack: Exploiting collaborative relationships across multiple modalities to embed malicious instructions.

Model Algorithm Security

  • Multi-modal Content Compliance Risk: Multi-modal models generating cross-modal non-compliant content to bypass detection mechanisms.
  • Intent Disruption & Goal Manipulation: Disrupting the Agent’s original intent and manipulating its behavioral goals through specific inputs.
  • Cross-modal Hallucination: Multi-modal models producing contradictory or fake content across different modalities, affecting decision quality.

Data Security

  • Cascading Hallucination Attack: Using multi-Agent shared memory mechanisms to spread erroneous information, leading to cognitive pollution and poisoning during Agent collaboration.

NSFOCUS LLM Protection Solutions

NSFOCUS, guided by the core philosophy of “AI-Native Security + Intelligent Operations”, leverages the technical expertise to build a multi-layered defense system covering the entire lifecycle of LLMs. It offers LLM security products and services that covers security of the model itself, integrity of training data, third-party components and supply chain security, plugin security and model outputs, helping enterprises implement AI strategy in compliance and security.

In the event, NSFOCUS has unveiled three new AI Agent Security Components:

  • AI Agent Asset and Risk Governance System: Enables fine-grained discovery and dynamic inventory of core AI agent components—including models, tools, MCP, knowledge bases, and prompts—to build comprehensive asset and risk profiles.
  • Runtime Intent and Behavior Security Protection for AI Agents: Leverages AI modeling of agent responsibility boundaries to monitor real-time interactions with MCP, tools, and external systems. It detects and automatically blocks risks such as unauthorized access and data leaks.
  • AI Agent Red Team Assessment and Continuous Validation Platform: Utilizes an AI-powered red team engine to generate targeted attack scenarios based on agent configurations and business contexts. Through single-round and multi-round dialogue simulations, it uncovers latent risks in depth.

From AI Copilot to AI Agent, as LLM applications move from collaborative assistance to autonomous execution and penetrate deeper into core business processes, the importance of security becomes increasingly prominent. NSFOCUS will continue to track the evolving risks and demands of AI applications, constantly optimizing its overall security solutions and upgrading products and services. NSFOCUS aims to transform security from a “concern” that hinders AI innovation into a “confidence booster” that drives business growth.