NSFOCUS AI-Scan for LLM Content Assessment

July 10, 2025 | NSFOCUS

NSFOCUS AI-Scan detects security risks in large language models using a professionally curated and calibrated risk database. It provides LLM content assessment, adversarial safety assessment, and supply chain risk detection. In this post, we give a brief overview of the content security assessment features.

Create a Task

Step 1: AI-Scan supports over 140 commercial and open-source large language models, as well as any LLM that exposes an OpenAI-compatible RESTful interface. Users can create a task by entering the API endpoint, vendor/model name, and API key.
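To illustrate what "OpenAI-compatible RESTful interface" means here, the sketch below builds a chat-completions request from the same three inputs AI-Scan asks for (endpoint, model name, API key). The endpoint URL and key are placeholders; AI-Scan performs this probing internally.

```python
import json

def build_probe_request(api_base, model, api_key, prompt="ping"):
    """Build an OpenAI-compatible chat-completions request from the
    three fields AI-Scan collects when creating a task.
    All values used below are illustrative placeholders."""
    url = api_base.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, headers, json.dumps(body)

url, headers, body = build_probe_request(
    "https://api.example.com/v1", "my-model", "sk-placeholder")
```

Any target that accepts a request of this shape can be scanned, regardless of vendor.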

Step 2: Select the appropriate content compliance template. AI-Scan has four built-in content compliance risk detection templates based on the GB/T 45654-2025 standard: a generated content security assessment template (excluding rejection and non-rejection), a rejection assessment template, a non-rejection assessment template, and a full content compliance template.
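The four template categories can be sketched as a simple lookup. The keys are illustrative identifiers, not AI-Scan's actual template names; only the descriptions come from the product.

```python
# Hypothetical identifiers for the four built-in GB/T 45654-2025 templates;
# the descriptions match the categories AI-Scan ships.
COMPLIANCE_TEMPLATES = {
    "generated_content": "Generated content security (excluding rejection and non-rejection)",
    "rejection": "Rejection assessment",
    "non_rejection": "Non-rejection assessment",
    "full": "Full content compliance",
}

def describe_template(key):
    """Return the human-readable description for a template key,
    raising a clear error for unknown keys."""
    try:
        return COMPLIANCE_TEMPLATES[key]
    except KeyError:
        raise ValueError(
            f"unknown template {key!r}; choose from {sorted(COMPLIANCE_TEMPLATES)}")
```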

Step 3: Set advanced parameters as needed: target connection timeout, number of retries, and a speed limit for sending test prompts.
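The three advanced parameters interact in a straightforward way: each prompt is sent with a timeout, retried on failure up to the retry budget, and paced to stay under the speed limit. A minimal sketch, assuming a caller-supplied `send` callable (hypothetical; AI-Scan handles this internally):

```python
import time

def send_with_limits(send, prompts, timeout=10.0, retries=3, max_per_sec=2.0):
    """Send test prompts under the three advanced parameters AI-Scan exposes:
    connection timeout, retry count, and send-rate limit.
    `send(prompt, timeout=...)` is a hypothetical caller-supplied callable."""
    interval = 1.0 / max_per_sec  # minimum spacing between prompts
    results = []
    for prompt in prompts:
        for attempt in range(retries + 1):
            try:
                results.append(send(prompt, timeout=timeout))
                break  # success: stop retrying this prompt
            except OSError:
                if attempt == retries:
                    results.append(None)  # retry budget exhausted
        time.sleep(interval)  # simple pacing to respect the speed limit
    return results
```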

Report Generation

The assessment report shows answers, rejections, and non-rejections, with detailed pass rates for each subcategory. It also includes the details of each test case: prompt, actual answer, reference answer, judgment basis, and test result. Manual correction and report download are also available.
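The per-subcategory pass rate is a simple aggregation over judged test cases. A minimal sketch, with an illustrative `(subcategory, passed)` result shape rather than AI-Scan's actual report schema:

```python
from collections import defaultdict

def pass_rates(results):
    """Compute per-subcategory pass rates from judged test cases.
    `results` is an iterable of (subcategory, passed) pairs; this shape
    is illustrative, not AI-Scan's actual report format."""
    totals = defaultdict(lambda: [0, 0])  # subcategory -> [passed, total]
    for subcat, passed in results:
        totals[subcat][1] += 1
        if passed:
            totals[subcat][0] += 1
    return {k: p / t for k, (p, t) in totals.items()}
```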

Customized Test Prompt Import

Customized test prompts are supported, so users can maintain their proprietary test cases as templates. When importing objective prompts, standard answers are required; for subjective prompts, standard answers or assessment criteria are required for intelligent assessment. JSONL, JSON, CSV, and Excel formats are supported for import.
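A JSONL import file carries one test case per line, and the rules above (objective prompts need a standard answer; subjective prompts need an answer or assessment criteria) become validation checks. The field names below (`prompt`, `type`, `answer`, `criteria`) are illustrative; consult AI-Scan's import documentation for the real schema.

```python
import json

def load_prompts(jsonl_text):
    """Parse a JSONL test-prompt file and enforce the import rules:
    objective prompts need a standard answer; subjective prompts need
    an answer or assessment criteria. Field names are illustrative."""
    cases = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        case = json.loads(line)
        if case.get("type") == "objective" and "answer" not in case:
            raise ValueError("objective prompt missing standard answer")
        if case.get("type") == "subjective" and not (
                "answer" in case or "criteria" in case):
            raise ValueError("subjective prompt missing answer or criteria")
        cases.append(case)
    return cases
```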