RSAC 2025 Innovation Sandbox | Aurascape: Reconstructing the Intelligent Defense Line of AI Interactive Visibility and Native Security

April 25, 2025 | NSFOCUS

Company Overview

Aurascape is a cybersecurity startup founded in 2023 and headquartered in Santa Clara, California. The company was co-founded by senior security experts and engineers from world-class technology companies such as Palo Alto Networks, Google, and Amazon. The team has deep expertise in network security, artificial intelligence, and network infrastructure, and its members have built multiple security products generating billions of dollars in annual revenue. Aurascape’s mission is to “enable businesses to innovate fearlessly in the age of AI.” The company aims to transform how organizations safeguard themselves with what it describes as the world’s most advanced AI security platform, so that AI-driven innovation can be adopted quickly in a controlled, secure environment.

Figure 1: Co-Founders of Aurascape

In August 2024, Aurascape completed a $12.8 million seed funding round led by Mayfield Fund and Celesta Capital, with participation from StepStone Group, AISpace, and Mark McLaughlin, former Chairman and CEO of Palo Alto Networks. With its deep understanding of AI-native security challenges, Aurascape made it into the top 10 finalists of the 2025 RSA Conference Innovation Sandbox contest. According to RSAC, “Aurascape provides the protection that leaders in security and AI need to confidently embrace AI technology.”

Product Background

The rapid adoption of generative AI and AI agents is reshaping how companies collaborate and how information flows at an unprecedented rate. At the same time, companies face equally unprecedented security challenges: How can data leakage be prevented? How can “shadow AI” applications be identified? Is malicious intent hidden in AI-generated content? Traditional security systems fail to provide effective answers to these questions. Aurascape points out that AI applications interact in fundamentally new ways: communications are dynamic, real-time, and autonomous, and traditional protection mechanisms are largely powerless under this new paradigm. The Aurascape platform is therefore purpose-built for it, emphasizing “complete visibility and controls” across the behavioral patterns and data flow paths of thousands of AI applications.

Aurascape believes that every company application will eventually become AI-driven. To this end, it has built a security platform architected to adapt to the ever-evolving AI ecosystem, with full support for the latest forms of AI tools such as generative AI, embedded AI, and agentic AI. The platform is designed to provide strong protection for companies at the forefront of the AI wave.

Figure 2: Aurascape Platform

The platform’s goal is to prevent new threats and protect corporate data with unprecedented accuracy while ensuring that end users’ productivity is not disrupted. By actively monitoring AI interactions, identifying embedded AI components, and finely managing multimodal data sharing, Aurascape aims to build an AI security governance system that is intelligent, flexible, and close to real-world scenarios. Its functional design highlights three core capabilities — visibility, protection, and prevention — to meet the security management challenges brought about by the widespread deployment of generative AI and embedded AI.

Visibility: The Aurascape platform provides complete coverage of AI tools within the company, spanning thousands of tools from generative AI to embedded AI and agentic AI. The platform can automatically discover new AI apps the day they appear, and performs conversation-level analysis of AI prompts and responses to help companies understand the data risks behind each AI interaction. Aurascape also supports real-time monitoring of “shadow AI” use, unauthorized access, and sensitive data sharing.

Protection: In terms of data protection, the Aurascape platform supports the classification and protection of multimodal content, covering data types such as text, voice, image, video and code.

The platform’s built-in labeling system supports hundreds of semantic dimensions, and its “data fingerprinting” function allows companies to identify sensitive content with higher precision, enhancing detection accuracy while effectively reducing false positives. This mechanism is particularly suited to protecting key assets such as intellectual property and source code.
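The general idea behind data fingerprinting can be illustrated with a minimal sketch: hash overlapping word shingles of a protected document into an index, then measure how much of an outgoing prompt matches that index. The function names, the 8-word shingle size, and the match threshold below are illustrative assumptions, not Aurascape’s implementation.

```python
import hashlib

def shingles(text: str, k: int = 8) -> set[str]:
    """Split normalized text into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def fingerprint(text: str, k: int = 8) -> set[str]:
    """Hash each shingle so the raw sensitive text never leaves the index."""
    return {hashlib.sha256(s.encode()).hexdigest() for s in shingles(text, k)}

def overlap_ratio(outgoing: str, index: set[str], k: int = 8) -> float:
    """Fraction of the outgoing text's shingles found in the fingerprint index."""
    out = fingerprint(outgoing, k)
    return len(out & index) / len(out) if out else 0.0
```

A verbatim excerpt of a fingerprinted document pasted into a prompt produces a high overlap ratio and can be flagged, while unrelated text scores near zero; because only hashes are stored, the index itself does not expose the protected content.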

Prevention: For new threats arising from AI-generated content, Aurascape provides a set of protection mechanisms centered on content understanding, identifying phishing, malicious code generation, social engineering, and hidden attack intent in AI output. The platform dynamically evaluates each AI response through content-level, human-like understanding, blocking potential risks before AI-generated content enters the business process.

Solution

According to its official website, Aurascape provides five AI security solutions to meet different needs:

1. Discover and monitor AI

This solution is designed to help companies fully understand the actual usage of AI tools within their organizations, especially the distribution and behavior of generative AI, embedded AI, and agentic AI. It centers on a “same-day discovery” capability that can identify AI tools the day they launch, continuously recording prompts, response content, and user interaction data to build an AI tool asset view covering the entire organization. Functionally, the solution aims to fill the information gap that currently leaves security teams in the dark about where and how AI is being used, addressing the ineffectiveness of traditional auditing methods in the AI context.

Figure 3: AI tool asset view
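Conceptually, discovery of this kind reduces to matching observed traffic against a continuously updated catalog of known AI services. The sketch below assumes a toy catalog and a simplified “user domain” proxy-log format; a real platform would maintain thousands of catalog entries and update them daily to support same-day discovery.

```python
# Hypothetical catalog of known AI app domains (illustrative, not Aurascape's).
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
}

def discover(log_lines: list[str]) -> dict[str, set[str]]:
    """Build an AI app inventory (app -> users seen accessing it)
    from simplified 'user domain' proxy-log lines."""
    inventory: dict[str, set[str]] = {}
    for line in log_lines:
        user, domain = line.split()
        app = KNOWN_AI_DOMAINS.get(domain)
        if app:  # non-AI traffic is ignored
            inventory.setdefault(app, set()).add(user)
    return inventory
```

The resulting inventory is the seed of the organization-wide asset view: each entry ties a discovered tool to the users interacting with it, which is what later per-conversation analysis builds upon.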

However, the author believes the actual performance of this solution may be influenced by a variety of factors. For instance, AI tools can be integrated through multiple pathways, such as browser extensions, API integrations, or embedded SaaS modules, and it remains to be seen whether the system can achieve high coverage across different environments. Additionally, while the solution offers semantic-level analysis of prompts and AI responses, there is no clear indication of how Aurascape addresses data privacy protection. As such, enterprises may need to carefully evaluate their own needs and compliance risks when adopting it.

Overall, the solution is conceptually forward-looking and represents a crucial component in the AI security governance framework. However, its accuracy in detection and adaptability still need to be proven through practical application.

2. Safeguarding AI use

This solution focuses on commonly seen problems in AI application interactions, such as company data leakage, compliance risks, and audit blind spots. It uses a built-in multimodal data recognition, detection, and classification engine to identify and protect text, voice, images, videos, and code in real time. The platform claims not to rely on static rules, instead using contextual understanding and organization-level semantic learning to improve classification accuracy and effectively reduce false positives.

The core of the mechanism is “classification while using”: as users interact with AI tools, their input and output content is automatically classified and evaluated against policy, supporting both interception and release modes. The platform also provides real-time prompts and user guidance, seeking to minimize interference with the end-user experience while ensuring safety.

Figure 4: Sensitive information identification during a conversation

The author believes the actual effect of this solution may depend on the classification model’s semantic recognition and sensitive-information grading capabilities in complex scenarios. Relatively mature sensitive-information identification and desensitization technologies already exist on the market and perform well in general scenarios, but potential adopters usually need to customize protection strategies for their specific business context. The author has not yet seen industry-specific solutions from Aurascape; if it provides only general protection capabilities, it may struggle with the differences in data structures and compliance standards across industries such as healthcare, finance, and energy. In addition, the recognition accuracy for embedded text, watermarks, QR codes, and similar content in images will determine the practical limits of this solution in multimodal scenarios.

Overall, the solution represents a relatively advanced “soft intervention” approach to AI data protection. However, its actual adaptability within the complex environments of large organizations still needs to be supported by more empirical evidence.

3. Copilot readiness

This solution focuses on the readiness of AI copilots, such as GitHub Copilot and Microsoft Copilot, for secure use in the company. As such tools become widely integrated into code repositories, document systems, and collaboration platforms, companies are increasingly concerned about whether their access rights are compliant and whether data is prone to over-sharing.

The solution connects to the company’s internal file repositories to assess whether the access permissions configured for Copilot are appropriate. It identifies potential over-privilege issues by considering data types, sensitivity levels, and user attributes. The platform also monitors whether Copilot is disseminating sensitive content to all employees, external users, or other AI systems. It aims to reduce the risk of data leakage caused by AI automation without interfering with user efficiency.

Figure 5: Copilot readiness assessment
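The over-privilege check at the heart of this assessment can be sketched as comparing each file’s Copilot-visible audience against the maximum audience its sensitivity tier permits. The tier names, audience model, and function below are illustrative assumptions about how such an audit might work, not Aurascape’s actual data model.

```python
# Hypothetical mapping: sensitivity tier -> maximum permissible audience.
SENSITIVITY_MAX_AUDIENCE = {
    "public":     {"all_employees", "external"},
    "internal":   {"all_employees"},
    "restricted": {"finance_team"},
}

def audit(files: list[dict]) -> list[str]:
    """Return findings for files whose Copilot-visible audience
    exceeds what their sensitivity tier permits."""
    findings = []
    for f in files:
        allowed = SENSITIVITY_MAX_AUDIENCE[f["sensitivity"]]
        excess = f["copilot_audience"] - allowed  # set difference
        if excess:
            findings.append(f"{f['name']}: over-shared to {sorted(excess)}")
    return findings
```

Run continuously rather than once, the same comparison supports the “continuous monitoring plus behavior correction” pattern: a newly widened ACL surfaces as a fresh finding before Copilot propagates the content.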

From the perspective of technical design, Aurascape provides a systematic governance path here, showing a certain maturity in “access readiness audit” and “continuous monitoring plus behavior correction.” By identifying permission mismatches and sharing patterns, the solution offers a clear landing point for corporate AI governance, making it especially suitable for large organizations preparing to deploy Copilot-like tools at scale.

However, the author also believes the effectiveness of such mechanisms is closely tied to the complexity of a company’s internal permission system. Many organizations today lack a standardized permission architecture, which may lead to misjudgments or audit blind spots in practice. In addition, how the solution integrates with existing DevSecOps processes, and whether it supports fine-grained behavioral guidance (such as tiered prompts and user feedback mechanisms), remains to be seen.

4. Coding assistant guardrails

Aurascape has designed this solution to manage the usage risks of AI code development assistants within enterprises, such as CodeWhisperer. The solution aims to strike a balance between “boosting development efficiency” and “protecting core code assets” by implementing policy guidance and behavior analysis to achieve “limited permission and precise protection.”

The platform also supports the identification of unauthorized plugins, IDE integrations, and non-standard access methods that bypass browsers. It can configure different levels of response strategies based on code sensitivity: automatically blocking sharing behavior for highly sensitive code, and intervening in operations through prompts and confirmations for regular projects, thus avoiding blanket bans that could impact development efficiency. Additionally, the system can recognize developer preferences and actual usage behaviors, assisting enterprises in optimizing their subscription structures for paid tools.

Figure 6: Code assistant protection
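The sensitivity-tiered response described above — block for highly sensitive code, prompt-and-confirm for regular projects — can be sketched with a simple path-prefix classifier. The path prefixes, tier names, and response table are hypothetical; the author’s later caveat applies directly: if classification really did rest only on paths like this, accuracy in complex projects would suffer.

```python
# Hypothetical path-prefix rules assigning a sensitivity tier to code,
# and the response mode applied before code leaves the IDE.
SENSITIVE_PATHS = {"src/crypto/": "high", "src/billing/": "high", "src/": "normal"}
RESPONSES = {"high": "block", "normal": "confirm", "unknown": "allow"}

def classify_path(path: str) -> str:
    """Longest matching prefix wins, so src/crypto/ outranks src/."""
    best, best_len = "unknown", -1
    for prefix, tier in SENSITIVE_PATHS.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = tier, len(prefix)
    return best

def response_for(path: str) -> str:
    return RESPONSES[classify_path(path)]
```

The graded response table is the point: only the highest tier triggers a hard block, so routine work is slowed by at most a confirmation prompt rather than a blanket ban.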

In terms of design philosophy, the solution offers a rather specific approach to addressing data leakage issues in AI-assisted development scenarios, particularly demonstrating practical value in “dynamic authorization + fine-grained control.” Its continuous discovery capability for “zero-day” tools also gives the strategy a certain degree of forward-looking nature, which is commendable.

However, the author thinks the solution’s scalability across teams remains to be seen. Compatibility among different languages, development toolchains, and code repository standards will determine deployment difficulty and the upper limit of policy granularity. Meanwhile, the platform’s capability for “code context understanding” has not been explicitly disclosed; if it relies solely on file paths or naming conventions, classification accuracy may fall short in complex projects.

5. Frictionless AI security

The goal of this solution is to minimize interference with end users and security teams while still ensuring AI security. Through a series of automated mechanisms, the platform strives to break the binary limitation of traditional security products: either over-blocking or laissez-faire.

The solution emphasizes end-to-end automation, from AI application discovery and risk assessment to policy enforcement and incident response. In user interactions in particular, Aurascape not only explains why an action was blocked but also offers improvement suggestions and a temporary appeal mechanism, making safe behavior more explainable and negotiable. For security administrators, the platform provides automated ticketing along with refined review and backtracking capabilities to reduce the burden on front-line teams.

Figure 7: AI Risk Response Simplified Automation

From a functional standpoint, Aurascape’s combination of “soft guidance” and gradual policy rollout reflects a relatively advanced human-machine collaboration concept. It uses AI for data classification and risk judgment, supplemented by “data fingerprinting” technology to reduce false positives, which should improve the accuracy and acceptance of policies in actual use.

However, the author believes the key difficulty in achieving a frictionless experience lies in the contextual adaptability of the policy engine. If risk-level assessment or anomaly detection is not accurate enough, “soft guidance” can easily become soft laissez-faire. How the platform handles misjudgments, bypasses, and abuse within the automated appeal process also remains an open problem across the industry.

Overall, this is a very forward-looking solution in terms of concept and direction. We look forward to Aurascape’s further investment in this field in the future.

Summary

Aurascape has built a relatively comprehensive product system around “AI visibility,” “multimodal data protection,” and “user collaborative governance,” aiming to address the core security challenges enterprises face amid the widespread adoption of generative AI. Its solutions are forward-looking in design philosophy, but cross-industry adaptation, real-world validation, and model interpretability still require further observation. AI-native security is gradually emerging as a distinct development path in cybersecurity, and achieving effective protection without sacrificing user efficiency is a challenge every emerging platform must face. Aurascape is a representative practitioner of this trend, and its active exploration in this direction is precisely why it was selected for the 2025 RSAC Innovation Sandbox. The future performance of this young company is worth watching.

References

[1] Aurascape Inc. (2024) Discover and monitor AI. Available at: https://aurascape.ai/discover-and-monitor-ai/ (Accessed: 21 April 2025).

[2] Aurascape Inc. (2024) Safeguard AI use. Available at: https://aurascape.ai/safeguard-ai-use/ (Accessed: 21 April 2025).

[3] Aurascape Inc. (2024) Copilot readiness. Available at: https://aurascape.ai/copilot-readiness/ (Accessed: 21 April 2025).

[4] Aurascape Inc. (2024) Coding assistant guardrails. Available at: https://aurascape.ai/coding-assistant-guardrails/ (Accessed: 21 April 2025).

[5] Aurascape Inc. (2024) Frictionless AI security. Available at: https://aurascape.ai/frictionless-ai-security/ (Accessed: 21 April 2025).

[6] Aurascape Inc. (2024) Product overview. Available at: https://aurascape.ai/product/ (Accessed: 21 April 2025).

[7] RSA Conference (2025) RSAC 2025 Innovation Sandbox finalists announced. Available at: https://www.rsaconference.com/ (Accessed: 21 April 2025).

[8] Aurascape Inc. (2024) About us. Available at: https://aurascape.ai/about/ (Accessed: 21 April 2025).

[9] Aurascape Inc. (2024) Aurascape AI secures $12.8 million in oversubscribed seed funding. Available at: https://aurascape.ai/aurascape-ai-secures-12-8-million-in-oversubscribed-seed-funding-to-revolutionize-cybersecurity-for-the-ai-era/ (Accessed: 21 April 2025).