Company Profile
ZeroPath is an AI-native application security startup founded in 2024; its core products carry the same ZeroPath brand. The company focuses on using AI to automatically discover, verify, and fix code vulnerabilities, aiming to break through a limitation of traditional SAST, SCA, secrets scanning, and IaC scanning: the tools operate in isolation and their results stay fragmented. ZeroPath instead integrates application security analysis, vulnerability verification, and fix suggestions into a unified platform. Its external narrative centers on "making security conclusions more verifiable and fixes more actionable": not only identifying risks, but converting findings, as far as possible, into fix actions the development team can directly review and land. As a Y Combinator Summer 2024 (S24) startup, ZeroPath also reflects the current evolution of application security toward AI-native, low-noise, engineering closed-loop products.
ZeroPath was co-founded by Dean Valentine (CEO), Nathan Hrncirik (CIO), Raphael Karger (CTO), and Etienne Lunetta (COO). The photos of the four founders are shown in Figure 1. Public information suggests Dean Valentine is currently the company's most visible external representative. The founding team has backgrounds in serial entrepreneurship, Tesla's red team, and Google security engineering, which helps explain why ZeroPath focuses its products on complex business logic vulnerabilities, verification of exploit conditions, and driving automated fixes — all close to real enterprise scenarios. In 2026, ZeroPath was selected as a finalist in the RSAC Innovation Sandbox contest, further raising its profile in emerging application security.
Background
In enterprise AppSec (application security), one pattern is all too common: the more tools a team buys, the more alerts it receives, and the slower remediation becomes. This is often not just because the number of vulnerabilities grows, but because the cost of triaging and handling them grows at the same time.
On one hand, tools such as SAST, SCA, secrets scanning, and IaC scanning keep multiplying, and alerts from different sources keep pouring in. On the other hand, the genuinely high-risk problems tend to be complex business logic flaws such as missing access control, authentication gaps, and chained trigger conditions [1]. At the same time, GenAI-accelerated code generation makes commits faster and changes more fragmented, likely carrying more vulnerabilities. With these three factors combined, what enterprises often see is not faster risk convergence from more tools, but more alerts, harder triage, and a slower remediation rhythm.
Alert noise is not a minor problem: it directly delays vulnerability remediation
A typical limitation of traditional tools is that they are good at finding whether the code has a problem, but bad at answering whether the problematic code can actually be exploited. As a result, many alerts are only theoretically valid and hard to reproduce in a specific business system. Take PM2, a process manager and deployment tool widely used in the Node.js ecosystem for service keep-alive, log management, and multi-process operation: a regular expression denial-of-service (ReDoS) vulnerability in it has been disclosed [28]. Traditional tools generally raise an alert whenever the vulnerable code is present, but they do not understand how the system actually uses PM2 — whether the component is called, whether the vulnerability-related interface is used, and whether sanitization is applied.
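The failure mode behind ReDoS can be sketched in a few lines. The pattern below is a textbook catastrophic-backtracking example chosen for illustration only; it is not PM2's actual regex:

```python
import re
import time

# Nested quantifiers make the engine try exponentially many ways to split
# the input into groups before concluding there is no match.
VULNERABLE = re.compile(r"^(a+)+$")
SAFE = re.compile(r"^a+$")  # matches the same language in linear time

def timed_match(pattern, text):
    start = time.perf_counter()
    result = pattern.match(text)
    return result is not None, time.perf_counter() - start

# A short non-matching input already forces massive backtracking; each
# extra 'a' roughly doubles the work for the vulnerable pattern.
attack = "a" * 20 + "!"

ok, t_safe = timed_match(SAFE, attack)
print(f"safe pattern:       matched={ok}, {t_safe:.6f}s")
ok, t_vuln = timed_match(VULNERABLE, attack)
print(f"vulnerable pattern: matched={ok}, {t_vuln:.6f}s")
```

Both patterns reject the input, but the vulnerable one takes dramatically longer, and the gap grows exponentially with input length. Whether this matters in a given system still depends on whether attacker-controlled strings ever reach the regex — exactly the context a scanner that only matches code patterns cannot see.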
There are three common scenarios:
1. The system pulls in the risky component, but business code never calls the vulnerable code. The tool alerts, yet the vulnerability is dead code and poses no threat;
2. The relevant path is reachable, but the input is sanitized or constrained. For example, the business passes a fixed string that can never match the regular expression pattern that crashes the server, so the vulnerability is hard to actually trigger;
3. Business code directly calls the relevant functions without effective protection. Here the vulnerability is close to genuinely exploitable. The problem is that traditional tools tend to stop at the level of "this component has a vulnerability".
Security work then turns into high-intensity manual analysis: pulling each alert out of the massive output and filling in context — is this route reachable from the external network, is this code path behind authentication, is this parameter user-controllable? When the project is small, experience and manual effort can keep this running; as system complexity and alert volume grow, it becomes hard to sustain on human judgment alone. An important reason ZeroPath has drawn attention is that it tries to address this industry-wide problem: the bottleneck in application security is often not that issues go undetected, but that they cannot be handled [1] — which is also where AI is most likely to generate practical value.
Business logic vulnerabilities are hard to detect because they do not present typical vulnerability patterns
Unlike more typical vulnerabilities such as injection, deserialization, or misuse of dangerous functions, the difficulty with business logic vulnerabilities usually lies not in what the code writes incorrectly, but in what constraints the system is missing. Such problems rarely show up as explicit unsafe calls; they hide in logical links such as access control, business state transitions, or resource ownership checks. Even when the code looks unremarkable on the surface, the system can still expose high-risk defects at the level of business semantics.
Take an interface that queries the database by order number and returns the order details: GET /api/orders/{id}. On the surface this interface shows no obvious injection risk or explicitly dangerous operation, and SAST may not alert. But whether it is secure often hinges on a necessary business constraint: does the current user have the right to access this order? If the system lacks the check that the order must belong to the current user, an attacker may read other people's order information by enumerating ids — a typical Insecure Direct Object Reference (IDOR), or more broadly an access control flaw, as shown in Figure 4. It is hard to detect because the tool must not only decide whether the id comes from user-controllable input, but also understand the system's authorization semantics: whether the rule "users may only access their own orders" is correctly implemented on the specific code path. Such constraints are rarely implemented uniformly across systems.
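The missing constraint can be made concrete with a minimal sketch. The data model and handler names here are hypothetical (a real service would return HTTP 403/404 rather than None):

```python
# In-memory stand-in for the orders table; data is invented.
ORDERS = {
    101: {"owner": "alice", "item": "laptop"},
    102: {"owner": "bob", "item": "phone"},
}

def get_order_vulnerable(order_id, current_user):
    # Looks harmless: no injection, no dangerous call, so pattern-based
    # SAST stays quiet. But any authenticated user can enumerate ids
    # and read other users' orders.
    return ORDERS.get(order_id)

def get_order_fixed(order_id, current_user):
    # The business constraint a tool cannot infer from syntax alone:
    # the order must belong to the requesting user.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != current_user:
        return None  # deny access
    return order

print(get_order_vulnerable(102, "alice"))  # leaks bob's order
print(get_order_fixed(102, "alice"))       # None: access denied
```

The diff between the two handlers is tiny, which is exactly why the flaw is invisible to syntax-level checks: the vulnerability is an absent line, not a present one.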
The fundamental reason business logic vulnerabilities are hard to detect is that judging them depends heavily on understanding the specific business context rather than matching a single syntactic feature or dangerous pattern. The OWASP Top 10 (2021) lists Broken Access Control as the top risk category [15], highlighting how widespread and serious improperly implemented access control policies are, and how easily they slip into day-to-day feature development.
Widespread GenAI code generation is surfacing more vulnerabilities
Another piece of background is the change in development rhythm. In its commentary on the 2025 GenAI Code Security Report [13], Veracode reported testing the security of code produced by 100+ models on multilingual tasks and concluded that AI-generated code is frequently insecure, meaning risks have likely already entered the codebase. Large-scale use of GenAI has accelerated commit velocity, while developers' weak security awareness keeps pushing security work downstream, leaving security staff with a growing backlog of tasks.
Read against real development scenarios, this judgment becomes more concrete: when output is faster, commits are more fragmented, and review pressure is higher, problems such as missing validation and insecure default configuration are more likely to slip through rapid iteration.
ZeroPath: integrating and moving beyond the traditional "four-piece" code security stack
Most enterprise application security teams buy SAST, SCA, secrets scanning, and IaC scanning products separately, but the outputs of these tools remain siloed and hard to unify. Whenever the team needs to confirm whether a vulnerability is actually exploitable, it still has to manually merge multiple reports with business context.
ZeroPath's core proposition is to stop making users stitch together the results of multiple toolsets after the fact, and instead provide a unified view of application security analysis. Its official website sums it up in one sentence: One Scanner. All of AppSec [26]. RSAC's official introduction of the finalists likewise states that ZeroPath uses an AI-native engine to replace the traditional SAST/SCA/secrets/IaC combination, aiming to catch more complex business logic problems and chainable vulnerability sequences [1].
SAST: From “discovering dangerous points” to “connecting complete paths”
The typical mode of traditional SAST is to raise an alert after identifying potential sinks (dangerous functions or sensitive calls), leaving the input source, validation logic, and authentication conditions for analysts to trace. The approach itself is sound, but on real codebases it tends to produce floods of alerts and significantly raises the cost of manual interpretation.
What ZeroPath emphasizes on its SAST page is closer to the engineering-judgment step: it aims to track "from input to sensitive point" and treats business logic and authentication defects as primary targets (missing authentication, insecure direct object references, authorization bypass paths, payment race conditions, etc.) [6]. Put differently, ZeroPath is not content to be a syntax checker; it emphasizes analyzing the complete business execution path — where a request enters, what checks it passes through, and where it ends up.
Take insecure direct object reference again: an interface receives orderId and returns the order from a database query. SAST can easily flag risk at the point where sensitive data is returned from the query, but what actually determines whether this is a vulnerability is whether there is a check in between that the order belongs to the current user.
If the tool can string together the path "input parameter comes from the user → no user/tenant boundary check → order data read directly", analysts can judge much faster whether the finding is close to a real exploitable vulnerability. If it only emits the sentence "possible unauthorized access", the code path still has to be reviewed manually.
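The difference between the two kinds of output can be sketched as follows; the alert structure, field names, and file locations are all invented for illustration:

```python
# A point alert: one location, no context. The analyst must trace the path.
point_alert = {"rule": "possible-unauthorized-access",
               "location": "orders.py:42"}

# A stitched path: the same finding expressed as source -> missing check -> sink.
path_alert = {
    "rule": "idor",
    "steps": [
        {"where": "routes.py:18", "note": "orderId taken from request (user-controlled source)"},
        {"where": "routes.py:19", "note": "no user/tenant ownership check on the path"},
        {"where": "orders.py:42", "note": "order row returned to caller (sensitive sink)"},
    ],
}

def render(alert):
    if "steps" not in alert:
        return f"[{alert['rule']}] {alert['location']} (context must be traced manually)"
    lines = [f"[{alert['rule']}] source-to-sink path:"]
    for i, s in enumerate(alert["steps"], 1):
        lines.append(f"  {i}. {s['where']}: {s['note']}")
    return "\n".join(lines)

print(render(point_alert))
print(render(path_alert))
```

The second form carries the same finding, but each step is a checkable claim about the code, which is what turns an alert from a hint into evidence.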
SCA: From “component vulnerabilities” to “whether it can be triggered”
The long-standing problem with SCA is that CVE lists are large and full of high-severity items, yet teams often cannot promptly judge "do we even use this in this link?" So they either rush to patch without sufficient judgment, or defer for so long that security debt gradually accumulates.
The improvement direction in the software supply chain over the past few years is called reachability analysis: using dependency graphs and call paths to filter out vulnerability code that cannot be reached [16]. More recently, some work [27] has pushed toward exploitability analysis: it is not enough that a call chain exists from externally controllable input to the vulnerability; the controllable inputs and conditions along the chain must also satisfy the constraints of actual exploitation. Only such vulnerabilities merit deep analysis. ZeroPath takes a similar line on its solutions page: through "AI Reachability Analysis", a large set of flagged CVEs is narrowed to a small number of problems "more likely to be truly exploitable" [7]. ZeroPath's framing is closer to the second approach — it wants to push SCA output from "this component has a vulnerability" toward "is there an actual risk of exploitation?"
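At its core, reachability analysis is a graph search over the program's call graph. A minimal sketch, using a toy call graph with invented function names (a real analyzer must also resolve dynamic dispatch, callbacks, and framework entry points):

```python
from collections import deque

# Toy call graph: edges map caller -> callees. All names are hypothetical.
CALL_GRAPH = {
    "app.main": ["app.start_service", "app.parse_config"],
    "app.start_service": ["pm2_like.keep_alive"],
    "app.parse_config": [],
    "pm2_like.keep_alive": [],
    "pm2_like.vulnerable_parse": [],  # the CVE's vulnerable function: never called
}

def reachable(graph, entry, target):
    # Breadth-first search from the application's entry point: can the
    # vulnerable function be reached through any chain of calls?
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

print(reachable(CALL_GRAPH, "app.main", "pm2_like.vulnerable_parse"))  # False -> deprioritize
print(reachable(CALL_GRAPH, "app.main", "pm2_like.keep_alive"))        # True  -> worth analyzing
```

Exploitability analysis then goes a step further: even on a reachable path, the inputs along the chain must satisfy the vulnerability's trigger conditions, which is where constraint reasoning (and, in ZeroPath's framing, the LLM) comes in.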
This route is not unique to ZeroPath; it is a clear evolutionary direction in the SCA field over the past two years [32]. Public information shows that in 2025 Semgrep divided reachability analysis into three depths: dependency level, function level, and data-flow level [29]; Snyk's 2026 documentation defines reachability analysis as "whether the application calls code elements related to vulnerabilities" [30]; and Endor Labs goes a step further, distinguishing Exploitable, Potentially Exploitable, and False Positives directly in its documentation [31]. Against this backdrop, ZeroPath emphasizes not just "discovering affected components" but using large models to push the judgment toward "does the current system actually satisfy the exploitation conditions?" By contrast, Endor Labs' public emphasis remains mainly on static analysis methods. The distinction matters because it maps onto the root cause of alert backlogs: no team can afford to understand and audit, one by one, every constraint on hundreds of code call paths for every vulnerability.
Secrets and IaC: From “independent scan results” to “contextual evidence fusion”
Deploying secrets scanning and IaC scanning as standalone capabilities is nothing new. The more important question is whether their results can be understood within the same analytical framework as code paths, external exposure surfaces, and dependencies.
The company profile published for the RSAC Innovation Sandbox writes secrets and IaC directly into the scope of the "replacement stack" [1]. ZeroPath's solutions page further emphasizes contextual methods for reducing false positives, such as smarter filtering of secrets and identifying IaC risks that can be "reasonably ignored" [7]. Statements like these are common in product marketing, but placed within the logic of an integrated engine, their meaning becomes more concrete:
IaC tells you where the system is exposed to the outside; code tells you what can be done with the exposed entry point. Secrets scanning tells you whether sensitive credentials are exposed; code and permission links tell you which accessible resources those credentials may unlock. When these pieces of information can serve as corroborating evidence for one another, the so-called chained vulnerability stops being a conceptual description and becomes a verifiable path [1][7].
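A minimal sketch of this kind of evidence fusion, with invented findings and field names: results from different scanners only become a candidate chain when they converge on the same asset.

```python
# Invented findings from three independent scanners, keyed by asset.
iac_findings = [{"asset": "orders-api", "issue": "security group open to 0.0.0.0/0"}]
code_findings = [{"asset": "orders-api", "issue": "unauthenticated debug endpoint"}]
secret_findings = [{"asset": "orders-api", "issue": "database password in env file"},
                   {"asset": "batch-worker", "issue": "stale API key in repo"}]

def fuse(*finding_lists):
    # Group findings by asset; an asset with corroborating evidence from
    # more than one scanner forms a candidate chain worth verifying first.
    by_asset = {}
    for findings in finding_lists:
        for f in findings:
            by_asset.setdefault(f["asset"], []).append(f["issue"])
    return {asset: issues for asset, issues in by_asset.items() if len(issues) > 1}

chains = fuse(iac_findings, code_findings, secret_findings)
for asset, issues in chains.items():
    print(f"candidate chain on {asset}: " + " -> ".join(issues))
```

The lone secret on batch-worker stays a low-priority single finding, while orders-api — externally exposed, with an open entry point and a leaked credential — surfaces as the chain to verify first. Real correlation is of course far harder than matching an asset key, but the principle is the same.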
From discovering problems to promoting fixes: the new competitive direction of application security tools
The traditional AppSec workflow has long followed the chain "tool discovers → humans distribute and fix": security tools find vulnerabilities, the security team screens and prioritizes the results, and the problem is then handed to R&D via tickets, alert dashboards, or the code hosting platform. GitHub's code scanning system, traditional code scanning platforms, and recent developer security workflows mostly still follow this model. The process is mature, but in real organizations it creates obvious handover costs: the security team owns discovery and interpretation, the R&D team owns understanding and fixing, and whenever context or priority judgment goes missing in between, alerts pile up.
What is changing is that more and more tools are trying to push AppSec output from "finding problems" toward "driving fixes". ZeroPath's public proposition is exactly this: not to remain a scanner, but to push vulnerability detection, verification, and fix suggestions further into the development process, translating them as much as possible into changes the R&D team can review directly [9]; its SAST page explicitly says it produces readable fix suggestions and generates merge requests that can be reviewed and merged directly [6]. In the same vein, Claude Code Security, launched by Anthropic in 2026, also treats discovering vulnerabilities and providing targeted patch recommendations for human review as a core selling point, though its official statements emphasize synergy with existing tools rather than replacing the whole AppSec stack. Together they show that competition among application security tools is shifting from who can report more problems to who can most effectively shorten the path from discovery to fix.
From "finding risks" to "producing fix actions": why is automatic repair hard?
Generating a truly actionable fix is much harder than discovering the vulnerability, for at least three reasons. First, vulnerabilities cannot always be resolved by local changes; for business logic and access control problems in particular, fixing often touches interface semantics, permission boundaries, and process constraints. Second, patches must avoid introducing compatibility problems and new defects, meaning fix suggestions must not merely "look reasonable" but also survive compilation, tests, and code review. Finally, once a fix enters the PR, merge, and release process, it is no longer just a generation problem; it touches organizational governance, permission control, and responsibility allocation. When introducing Copilot Autofix, GitHub emphasized that its goal is to help developers fix code scanning alerts faster, but its documentation also clearly states that the feature cannot generate fixes for all alerts and that generated results have limitations and require human review.
From a research perspective, automated program repair (APR) is not a new problem, but it remains constrained by test completeness, verification cost, patch generalization, and the ability to repair complex errors [25]. In current product practice, this means "producing a PR" is far harder than "producing an alert". ZeroPath emphasizes generating mergeable PRs in its public materials, but public information remains limited on how patches are verified, how regression risk is reduced, and what coverage is achievable across languages and complex frameworks [4][6][21]. By contrast, Claude Code Security's official statements are more cautious: they emphasize generating patches on target code for human review, and the help documentation positions it as a security review and fix recommendation capability, not as automatically taking over merge decisions from developers. This difference also shows that while the industry is moving repair capability forward, it remains broadly conservative about the boundaries of "automatic repair".
From "security teams filing tickets" to "tools participating in fixes": the industry is moving repair capability forward
Looking back at the earlier AppSec workflow, security tools usually stopped at "find problems and tell developers", after which the security team did extensive screening, interpretation, and distribution; the R&D team might not face the actual tickets and fix tasks until weeks later. This is one reason traditional AppSec accumulates backlogs: tools own discovery, security teams own interpretation, development teams own fixing — and the longer the chain, the lower the throughput. Industry data likewise shows traditional AppSec programs under long-term pressure from fragmented tools, false positives, and to-do backlogs.
The common direction of the current crop of new tools is to shorten this chain as much as possible. GitHub summarizes the goal of automatically fixing code scanning results as "found means fixed" [10], converting findings directly into fix suggestions wherever possible [11]; Anthropic positions Claude Code Security as helping teams discover and fix issues that traditional methods may miss, complementing existing security workflows; ZeroPath goes a step further, packaging "detection, verification, fix suggestions, and PR generation" into its external narrative. The differences: GitHub essentially adds automated repair onto its existing code scanning system; Claude Code Security emphasizes using reasoning ability to supplement existing tools and drive automated remediation; ZeroPath's narrative is closest to "one unified engine coordinating discovery, verification, and repair" [8][9]. Whatever the path, the direction of competition is clear: whoever generates less noise, delivers high-confidence explanations faster, and slots more smoothly into developers' actual fix workflows will hold the advantage in this round of AppSec tool evolution.
Technical Comments
The industry has long harbored strong doubts about whether AI can truly understand code and identify business logic vulnerabilities: LLMs do not inherently possess stable, verifiable security judgment, so whether they can carry security audit tasks has remained a central open question.
Based on its public descriptions, the author believes ZeroPath reached the RSAC finals because it follows a route closer to "LLM-assisted specifications and semantics + program analysis for reviewable reasoning": the LLM fills in semantics, context, and rules, while the genuinely repository-wide, reviewable reasoning is handed to program analysis [4][10][11][17].
Lay the foundation first: build a code representation that supports path reasoning
Static analysis has endured not mainly because it is "intelligent", but because its conclusions are stable and its chain of evidence is traceable.
For static analysis to answer "can this input travel all the way to a sensitive operation?", the code usually has to be transformed into a structure better suited to reasoning. Analytically, the core is not reading code file by file but building a unified graph representation: nodes are statements, variables, and function calls; edges are control-flow, data-flow, and syntactic-structure relationships. Many tools call this kind of structure a Code Property Graph (CPG) [19].
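A toy version of such a graph query can illustrate the idea — far simpler than a real CPG such as Joern's, with invented node names and only two edge kinds:

```python
# Nodes are program points; typed edges mix data flow and control flow
# in one graph, as a CPG does. All names are hypothetical.
EDGES = {
    ("req.param(id)", "order_id"): "data_flow",
    ("order_id", "db.query"): "data_flow",
    ("auth_check", "db.query"): "control_flow",  # guard dominating the query
}

def data_flow_paths(edges, source, sink, path=None):
    # Depth-first walk restricted to data-flow edges: does user input
    # reach the sensitive operation, and through which nodes?
    path = (path or []) + [source]
    if source == sink:
        return [path]
    found = []
    for (a, b), kind in edges.items():
        if a == source and kind == "data_flow" and b not in path:
            found += data_flow_paths(edges, b, sink, path)
    return found

for p in data_flow_paths(EDGES, "req.param(id)", "db.query"):
    print(" -> ".join(p))
```

The point of the unified representation is that one query can mix edge kinds: the data-flow walk finds that user input reaches the query, while the control-flow edge lets a second query ask whether an authorization guard dominates that sink.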
ZeroPath's public materials say its pipeline starts from an abstract syntax tree (AST), then builds what it calls an "enriched graph" (its own term, with no explanation of how the graph is enriched), and then performs discovery, verification, and patch generation [4]. This alone reveals a fairly clear technical orientation: it at least recognizes that having large models merely read text is not enough — the code must first become a computable structure before one can talk about links, paths, and exploitability.
The LLM assists in generating analysis rules rather than directly carrying full security reasoning
In security code analysis, simply having an LLM read source code and emit vulnerability conclusions rarely satisfies stability, verifiability, and repository-wide analysis at the same time. Real vulnerability detection is not just a local code comprehension problem; it involves systematic reasoning over cross-file call chains, data-flow propagation, framework conventions, sanitizer identification, and source/sink specification modeling. For such problems, program analysis's advantage is stable, scalable path search and constraint propagation over structured intermediate representations; its limitation is acute sensitivity to framework semantics, business context, and rule maintenance. Large models, for their part, have limited input context and cannot ingest the whole codebase, and once the sources, sinks, or sanitizers are incomplete, detection capability is visibly capped [17]. The code property graph is a unified program representation designed exactly for this kind of code querying and analysis: it organizes syntactic structure, control flow, and data flow into one graph model to support systematic code reasoning.
A representative recent work, IRIS [17][18], illustrates a more feasible integration path. IRIS uses large language models to generate rule information for taint analysis and to supplement contextual analysis, while reasoning over the entire code repository is still done by static analysis. In other words, in this framework the LLM does not "replace program analysis and audit code directly"; it supplements static analysis with the semantic and rule information that traditional hand-maintained specifications struggle to cover. This division of labor has two payoffs: it exploits the LLM's strengths in code semantics, framework knowledge transfer, and context completion, while retaining program analysis's deterministic strengths in path traceability, result verifiability, and repository-wide reasoning. Combined with the AST, enriched graph, and discovery/verification/patch-generation pipeline described in ZeroPath's public materials, its technical route is likely close to this hybrid of "LLM supplies semantics and specifications, program analysis owns reviewable reasoning", rather than relying on large models alone to make black-box security judgments over a whole repository.
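The division of labor can be sketched as follows. A hardcoded dict stands in for LLM-generated source/sink/sanitizer specifications, and a simple loop stands in for deterministic taint propagation; all function and variable names are invented:

```python
# Stand-in for model-generated specifications: in an IRIS-style system the
# LLM proposes these, and the static analyzer consumes them.
llm_specs = {
    "sources": {"request.args.get"},
    "sinks": {"cursor.execute"},
    "sanitizers": {"escape_sql"},
}

# A linearized execution trace: (function called, variable produced, variable consumed).
TRACE = [
    ("request.args.get", "user_id", None),
    ("escape_sql", "safe_id", "user_id"),
    ("cursor.execute", None, "safe_id"),
]

def tainted_sink_reached(trace, specs):
    # Deterministic taint propagation: no model involved at this stage.
    tainted = set()
    for fn, out_var, in_var in trace:
        if fn in specs["sources"] and out_var:
            tainted.add(out_var)      # taint enters at a source
        elif fn in specs["sanitizers"]:
            pass                      # sanitizer output considered clean
        elif in_var in tainted and out_var:
            tainted.add(out_var)      # ordinary propagation
        if fn in specs["sinks"] and in_var in tainted:
            return True               # tainted data reaches a sink
    return False

print(tainted_sink_reached(TRACE, llm_specs))  # False: the sanitizer broke the flow
```

The propagation itself is trivially auditable; what the LLM contributes is the spec dict — knowing that `escape_sql` is a sanitizer in this framework is exactly the kind of semantic fact that hand-maintained rule sets chronically miss.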
This is why ZeroPath can publicly emphasize "deep understanding of code", "less noise", and findings that look "more like exploitable paths". Based on the research above, the editor believes ZeroPath has built, on top of its enriched graph, a unified reasoning framework around code semantics, false-positive convergence, and exploitable-path analysis, folding SCA's reachability/exploitability analysis into it [5][7]. The criterion for judging dependency vulnerabilities then shifts from "does the component contain known vulnerabilities?" to "can the current code path actually reach the vulnerable location?", improving both the explainability and the actionability of risk conclusions.
"Working stably in PRs" is the direction ahead
That academic methods achieve good results on a specific dataset does not mean they transfer stably to real software repositories. Real engineering environments feature mixed languages, legacy code, complex framework wrappers, heavy use of dynamic features, and insufficient test coverage — all of which significantly raise the difficulty of vulnerability detection, path verification, and patch generation.
From an industry-practice perspective, the fact that GitHub's automatic repair is built on static analysis foundations such as CodeQL, and that its documents repeatedly stress review and constraint requirements, shows precisely that automatic repair cannot rest on model generation alone; it must be built on engineering guardrails and verification mechanisms [10][11].
Judging from ZeroPath's existing public information, its overall direction matches the automated remediation trend: it emphasizes not only vulnerability discovery but also path analysis, result verification, and patch generation [4][21]. However, the public materials still do not explain in sufficient detail the depth of support for different languages and frameworks, the handling of dynamic features, detection capability in cross-language scenarios, or whether the evidence chain remains consistent and verifiable. The more prudent judgment at this stage: ZeroPath has demonstrated a realistic and attractive technical route, but its maturity and applicability boundaries in complex engineering environments still await verification through further public information [4][5][7].
Conclusion
ZeroPath's selection as one of the ten finalists in the RSAC 2026 Innovation Sandbox shows that its product narrative at least matches the judges' current concerns: reducing noise, identifying business logic vulnerabilities, linking risk paths, and trying to push fix actions all the way to the PR level. These points resonate right now because many teams have been squeezed out of processing capacity by continuously accumulating security debt — the more tools, the longer the queue, and few teams can confidently declare "we fixed the most dangerous ones first".
References
[1] RSA Conference LLC. Finalists Announced for RSAC Innovation Sandbox Contest 2026[EB/OL]. (2026-02-10)[2026-03-03]. https://www.rsaconference.com/library/press-release/finalists-announced-for-rsac-innovation-sandbox-contest-2026.
[2] PR Newswire. Finalists Announced for RSAC Innovation Sandbox Contest 2026[EB/OL]. (2026-02-10)[2026-03-03]. https://www.prnewswire.com/news-releases/finalists-announced-for-rsac-innovation-sandbox-contest-2026-302683184.html.
[3] RSAC Conference. Innovation Sandbox[EB/OL]. (n.d.)[2026-03-03]. https://www.rsaconference.com/usa/programs/innovation-sandbox.
[4] ZeroPath. How ZeroPath Works[EB/OL]. (2024-11-01)[2026-03-03]. https://zeropath.com/blog/how-zeropath-works.
[5] ZeroPath Team. Introducing ZeroPath: The Security Platform That Actually Understands Your Code[EB/OL]. (2025-08-12)[2026-03-03]. https://zeropath.com/blog/introducing-zeropath-v1.
[6] ZeroPath. AI-Native SAST – Application Security Testing[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/products/sast.
[7] ZeroPath. AI Application Security (AI AppSec)[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/solutions/ai-appsec.
[8] ZeroPath. llms-full.txt[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/llms-full.txt.
[9] ZeroPath. Trust Center[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/trust-center.
[10] Tempel P, Tooley E. Found means fixed: Introducing code scanning autofix, powered by GitHub Copilot and CodeQL[EB/OL]. (2024-03-20; updated 2025-04-07)[2026-03-03]. https://github.blog/news-insights/product-news/found-means-fixed-introducing-code-scanning-autofix-powered-by-github-copilot-and-codeql/.
[11] GitHub Docs. Responsible use of Copilot Autofix for code scanning[EB/OL]. (n.d.)[2026-03-03]. https://docs.github.com/en/code-security/responsible-use/responsible-use-autofix-code-scanning.
[12] Veracode. October 2025 Update: GenAI Code Security Report[EB/OL]. (2025-10)[2026-03-03]. https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/.
[13] Wessling J. We Asked 100+ AI Models to Write Code. Here’s How Many Failed Security Tests.[EB/OL]. (2025-07-30)[2026-03-03]. https://www.veracode.com/blog/genai-code-security-report/.
[14] Tischler N. 2026 State of Software Security: Risky Debt is Rising, But Your Strategy Starts Here[EB/OL]. (2026-02-24)[2026-03-03]. https://www.veracode.com/blog/2026-state-of-software-security-report-risky-security-debt/.
[15] OWASP. A01:2021-Broken Access Control[EB/OL]. (2021)[2026-03-03]. https://owasp.org/Top10/2021/A01_2021-Broken_Access_Control/.
[16] Wiz Experts Team. What is reachability analysis in cloud security?[EB/OL]. (2025-11-07)[2026-03-03]. https://www.wiz.io/academy/application-security/reachability-analysis-in-cloud-security.
[17] Li Z, Dutta S, Naik M. IRIS: LLM-Assisted Static Analysis for Detecting Security Vulnerabilities[C/OL]//International Conference on Learning Representations (ICLR) 2025. (2025)[2026-03-03]. https://proceedings.iclr.cc/paper_files/paper/2025/hash/582d4e27fa24168f3af1f4582655034b-Abstract-Conference.html.
[18] Li Z, Dutta S, Naik M. LLM-Assisted Static Analysis for Detecting Security Vulnerabilities[EB/OL]. (2024-05-27)[2026-03-03]. https://arxiv.org/abs/2405.17238.
[19] Joern. Code Property Graph Specification[EB/OL]. (n.d.)[2026-03-03]. https://cpg.joern.io/.
[20] GitHub. ZeroPath AI – GitHub Apps[EB/OL]. (n.d.)[2026-03-03]. https://github.com/apps/zeropath-ai.
[21] ZeroPath. Secure AI-Generated Code[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/solutions/secure-ai-generated-code.
[22] ZeroPathAI. zeropath-mcp-server[EB/OL]. (n.d.)[2026-03-03]. https://github.com/ZeroPathAI/zeropath-mcp-server.
[23] Rogers J. Hacking with AI SASTs: An overview of “AI Security Engineers” / “LLM Security Scanners” for Penetration Testers and Security Teams[EB/OL]. (2025-09-18)[2026-03-03]. https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters.
[24] ZeroPath. Quick Start – ZeroPath Documentation[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/docs/quickstart.
[25] Xin Q, Wu H, Reiss S P, et al. Towards practical and useful automated program repair for debugging[J]. arXiv preprint arXiv:2407.08958, 2024.
[26] ZeroPath. ZeroPath – AI-Native SAST & AppSec Platform[EB/OL]. (n.d.)[2026-03-03]. https://zeropath.com/.
[27] Deng P, Zhang L, Meng Y, et al. {ChainFuzz}: Exploiting Upstream Vulnerabilities in {Open-Source} Supply Chains[C]//34th USENIX Security Symposium (USENIX Security 25). 2025: 6199-6218.
[28] Snyk. Regular Expression Denial of Service (ReDoS) Affecting pm2 package, versions <6.0.9 (SNYK-JS-PM2-10335843; CVE-2025-5891)[EB/OL]. (2025-06-11)[2026-03-04]. https://security.snyk.io/vuln/SNYK-JS-PM2-10335843
[29] Semgrep. What You Should Know About Dependency Reachability in SCA [EB/OL]. (2025-12-15)[2026-03-17].
[30] Snyk. Reachability analysis [EB/OL]. (2026-02-20)[2026-03-17].
[31] Endor Labs. Reachability analysis [EB/OL]. (n.d.)[2026-03-17].
[32] BlackDuck. Beyond detection: Understanding vulnerability reachability in SCA [EB/OL]. (2025-06-30)[2026-03-17].
