{"id":33300,"date":"2025-07-16T03:06:31","date_gmt":"2025-07-16T03:06:31","guid":{"rendered":"https:\/\/nsfocusglobal.com\/?p=32357"},"modified":"2026-04-17T18:07:35","modified_gmt":"2026-04-17T18:07:35","slug":"nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment","status":"publish","type":"post","link":"https:\/\/nsfocusglobal.com\/pt-br\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/","title":{"rendered":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment"},"content":{"rendered":"<!DOCTYPE html PUBLIC \"-\/\/W3C\/\/DTD HTML 4.0 Transitional\/\/EN\" \"http:\/\/www.w3.org\/TR\/REC-html40\/loose.dtd\">\n<html><body><p>Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to produce incorrect predictions or behaviors. In this regard, AI-Scan provides LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an adversarial defense capability assessment report.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716a-1.png\"><img decoding=\"async\" src=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716a-1-1024x561.png\" alt=\"Red circular no entry sign with a white horizontal bar.\" class=\"wp-image-32362\"><\/a><\/figure>\n<\/div>\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b.png\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"482\" src=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-1024x482.png\" alt=\"Red circular no entry sign with a white horizontal bar.\" 
class=\"wp-image-32360\" srcset=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-1024x482.png 1024w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-300x141.png 300w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-768x362.png 768w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-1536x724.png 1536w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-2048x965.png 2048w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-600x283.png 600w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716b-200x94.png 200w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div>\n\n\n<p>Specifically, adversarial attacks are categorized into the following seven types:<\/p>\n\n\n\n<p><strong>01<\/strong> <strong>Model Hallucination<\/strong><\/p>\n\n\n\n<p>Model hallucination occurs when an LLM generates content that appears plausible but is incorrect, fabricated, or irrelevant to the input. This includes attack types such as factual hallucination and faithfulness hallucination.<\/p>\n\n\n\n<p><strong>02<\/strong> <strong>Meta-Prompt Leakage<\/strong><\/p>\n\n\n\n<p>Meta-prompt leakage refers to situations where an LLM inadvertently exposes its internally preset system-level prompts while responding to user requests, revealing underlying instructions, rules, or preferences that should remain hidden. This includes attack types such as keyword positioning leakage and assumed scenario leakage.<\/p>\n\n\n\n<p><strong>03<\/strong> <strong>Model Jailbreak Attack<\/strong><\/p>\n\n\n\n<p>Model jailbreak attack refers to bypassing the security restrictions of LLM systems through specific technical means, causing them to violate preset ethical guidelines, content review rules, or terms of use, thereby generating otherwise prohibited output such as violent, illegal, or privacy-compromising content. 
Attack types include DAN, assumed scenario jailbreak, assumed role jailbreak, and adversarial suffix attacks.<\/p>\n\n\n\n<p><strong>04<\/strong> <strong>Role Escape Attack<\/strong><\/p>\n\n\n\n<p>Role escape attack refers to users inducing an LLM to deviate from its preset role or behavioral norms through specific means, breaking through the security boundaries, ethical limits, or functional constraints set by developers and performing operations that should be prohibited. Such attacks usually target role-playing AI (such as customer service assistants and audit bots) with the aim of making the model &ldquo;forget&rdquo; its duties and act according to the attacker&rsquo;s intentions. Attack types include assumed role escape, assumed scenario escape, role escape by ignoring previous instructions, and prompt goal hijacking.<\/p>\n\n\n\n<p><strong>05<\/strong> <strong>Application Vulnerability Attack<\/strong><\/p>\n\n\n\n<p>LLM application vulnerability attack exploits design defects, logical vulnerabilities, or integration flaws in application systems built on large language models to carry out malicious behavior. Such attacks target not only the model itself but also weak links in the entire application ecosystem, potentially leading to serious consequences such as data leakage, service abuse, and permission bypass. Attack methods include code execution injection, XSS session hijacking, and adversarial encoding attacks.<\/p>\n\n\n\n<p><strong>06<\/strong> <strong>Model Function Abuse<\/strong><\/p>\n\n\n\n<p>Model function abuse refers to attackers using an LLM&rsquo;s capabilities in unintended ways to generate harmful content, bypass restrictions, or perform malicious actions. This type of abuse does not directly attack the model itself but leverages its legitimate functions for illegal or unethical purposes. 
Examples include phishing email generation, malicious code generation, and more.<\/p>\n\n\n\n<p><strong>07<\/strong> <strong>Model Inversion Attack<\/strong><\/p>\n\n\n\n<p>Model inversion attack is a privacy attack in which the attacker repeatedly queries the target model and analyzes its output to infer the model&rsquo;s training data or sensitive input information. This type of attack aims to &ldquo;spy&rdquo; on the original data memorized by the model, which may lead to personal privacy leakage or trade secret exposure. Attack types include training data leakage, model anomalies, and more.<\/p>\n\n\n\n<p>NSFOCUS AI-Scan now supports adversarial attack assessment covering all attack types mentioned above.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><a href=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c.png\"><img decoding=\"async\" width=\"1024\" height=\"488\" src=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-1024x488.png\" alt=\"Red circular no entry sign with a white horizontal bar.\" class=\"wp-image-32364\" srcset=\"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-1024x488.png 1024w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-300x143.png 300w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-768x366.png 768w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-1536x732.png 1536w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-600x286.png 600w, https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2025\/07\/0716c-200x95.png 200w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div><\/body><\/html>\n","protected":false},"excerpt":{"rendered":"<p>Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to produce incorrect predictions or behaviors. 
In this regard, AI-Scan provides LLM adversarial defense capability assessment, allowing users to select an adversarial attack assessment template for one-click task assignment and generate an adversarial defense capability assessment report. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":25321,"comment_status":"open","ping_status":"open","sticky":false,"template":"post-templates\/single-layout-8.php","format":"standard","meta":{"_acf_changed":false,"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","footnotes":""},"categories":[3],"tags":[914,845,862],"class_list":["post-33300","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","tag-ai-security","tag-ai-scan","tag-llm-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - NSFOCUS<\/title>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"pt_BR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - NSFOCUS\" \/>\n<meta property=\"og:description\" content=\"Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to\" \/>\n<meta property=\"og:url\" content=\"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/\" \/>\n<meta property=\"og:site_name\" content=\"NSFOCUS\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-16T03:06:31+00:00\" \/>\n<meta 
property=\"article:modified_time\" content=\"2026-04-17T18:07:35+00:00\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - NSFOCUS\" \/>\n<meta name=\"twitter:description\" content=\"Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. tempo de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#\\\/schema\\\/person\\\/fd9ab61c9c77a81bbd870f725cc0c61d\"},\"headline\":\"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability 
Assessment\",\"datePublished\":\"2025-07-16T03:06:31+00:00\",\"dateModified\":\"2026-04-17T18:07:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/\"},\"wordCount\":557,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#primaryimage\"},\"thumbnailUrl\":\"\",\"keywords\":[\"AI security\",\"AI-scan\",\"LLM\"],\"articleSection\":[\"Blog\"],\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/\",\"url\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/\",\"name\":\"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - 
NSFOCUS\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#primaryimage\"},\"thumbnailUrl\":\"\",\"datePublished\":\"2025-07-16T03:06:31+00:00\",\"dateModified\":\"2026-04-17T18:07:35+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#breadcrumb\"},\"inLanguage\":\"pt-BR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#primaryimage\",\"url\":\"\",\"contentUrl\":\"\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/nsfocusglobal.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#website\",\"url\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/\",\"name\":\"NSFOCUS\",\"description\":\"Security Made Smart and 
Simple\",\"publisher\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-BR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#organization\",\"name\":\"NSFOCUS\",\"url\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/nsfocusglobal.com\\\/wp-content\\\/uploads\\\/2024\\\/08\\\/logo-ns.png\",\"contentUrl\":\"https:\\\/\\\/nsfocusglobal.com\\\/wp-content\\\/uploads\\\/2024\\\/08\\\/logo-ns.png\",\"width\":248,\"height\":36,\"caption\":\"NSFOCUS\"},\"image\":{\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/#\\\/schema\\\/person\\\/fd9ab61c9c77a81bbd870f725cc0c61d\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-BR\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/d3dc987908fc59791d261b1006d84eb931d15287261476b9384e690ed0c568de?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/d3dc987908fc59791d261b1006d84eb931d15287261476b9384e690ed0c568de?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/d3dc987908fc59791d261b1006d84eb931d15287261476b9384e690ed0c568de?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"sameAs\":[\"https:\\\/\\\/nsfocusglobal.com\"],\"url\":\"https:\\\/\\\/nsfocusglobal.com\\\/pt-br\\\/author\\\/admin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - NSFOCUS","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"pt_BR","og_type":"article","og_title":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - NSFOCUS","og_description":"Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to","og_url":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/","og_site_name":"NSFOCUS","article_published_time":"2025-07-16T03:06:31+00:00","article_modified_time":"2026-04-17T18:07:35+00:00","author":"admin","twitter_card":"summary_large_image","twitter_title":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - NSFOCUS","twitter_description":"Large language model (LLM) adversarial attacks refer to techniques that deceive LLMs through carefully-designed input samples (adversarial samples) to","twitter_misc":{"Escrito por":"admin","Est. 
tempo de leitura":"3 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#article","isPartOf":{"@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/"},"author":{"name":"admin","@id":"https:\/\/nsfocusglobal.com\/pt-br\/#\/schema\/person\/fd9ab61c9c77a81bbd870f725cc0c61d"},"headline":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment","datePublished":"2025-07-16T03:06:31+00:00","dateModified":"2026-04-17T18:07:35+00:00","mainEntityOfPage":{"@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/"},"wordCount":557,"commentCount":0,"publisher":{"@id":"https:\/\/nsfocusglobal.com\/pt-br\/#organization"},"image":{"@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#primaryimage"},"thumbnailUrl":"","keywords":["AI security","AI-scan","LLM"],"articleSection":["Blog"],"inLanguage":"pt-BR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/","url":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/","name":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment - 
NSFOCUS","isPartOf":{"@id":"https:\/\/nsfocusglobal.com\/pt-br\/#website"},"primaryImageOfPage":{"@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#primaryimage"},"image":{"@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#primaryimage"},"thumbnailUrl":"","datePublished":"2025-07-16T03:06:31+00:00","dateModified":"2026-04-17T18:07:35+00:00","breadcrumb":{"@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#breadcrumb"},"inLanguage":"pt-BR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/"]}]},{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#primaryimage","url":"","contentUrl":""},{"@type":"BreadcrumbList","@id":"https:\/\/nsfocusglobal.com\/nsfocus-ai-scan-typical-capabilities-large-language-model-adversarial-defense-capability-assessment\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/nsfocusglobal.com\/"},{"@type":"ListItem","position":2,"name":"NSFOCUS AI-Scan Typical Capabilities: Large Language Model Adversarial Defense Capability Assessment"}]},{"@type":"WebSite","@id":"https:\/\/nsfocusglobal.com\/pt-br\/#website","url":"https:\/\/nsfocusglobal.com\/pt-br\/","name":"NSFOCUS","description":"Security Made Smart and 
Simple","publisher":{"@id":"https:\/\/nsfocusglobal.com\/pt-br\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/nsfocusglobal.com\/pt-br\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-BR"},{"@type":"Organization","@id":"https:\/\/nsfocusglobal.com\/pt-br\/#organization","name":"NSFOCUS","url":"https:\/\/nsfocusglobal.com\/pt-br\/","logo":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/nsfocusglobal.com\/pt-br\/#\/schema\/logo\/image\/","url":"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2024\/08\/logo-ns.png","contentUrl":"https:\/\/nsfocusglobal.com\/wp-content\/uploads\/2024\/08\/logo-ns.png","width":248,"height":36,"caption":"NSFOCUS"},"image":{"@id":"https:\/\/nsfocusglobal.com\/pt-br\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/nsfocusglobal.com\/pt-br\/#\/schema\/person\/fd9ab61c9c77a81bbd870f725cc0c61d","name":"admin","image":{"@type":"ImageObject","inLanguage":"pt-BR","@id":"https:\/\/secure.gravatar.com\/avatar\/d3dc987908fc59791d261b1006d84eb931d15287261476b9384e690ed0c568de?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/d3dc987908fc59791d261b1006d84eb931d15287261476b9384e690ed0c568de?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d3dc987908fc59791d261b1006d84eb931d15287261476b9384e690ed0c568de?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/nsfocusglobal.com"],"url":"https:\/\/nsfocusglobal.com\/pt-br\/author\/admin\/"}]}},"_links":{"self":[{"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/posts\/33300","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/use
rs\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/comments?post=33300"}],"version-history":[{"count":1,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/posts\/33300\/revisions"}],"predecessor-version":[{"id":35541,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/posts\/33300\/revisions\/35541"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/"}],"wp:attachment":[{"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/media?parent=33300"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/categories?post=33300"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nsfocusglobal.com\/pt-br\/wp-json\/wp\/v2\/tags?post=33300"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}