Models are also assets: AI will be a new arena of attack and defense

April 26, 2023 | NSFOCUS

On the afternoon of April 24, 2023, the RSA Conference announced this year’s winner of the Innovation Sandbox contest: HiddenLayer, an AI security vendor, was crowned the Most Innovative Startup 2023. Taking HiddenLayer, the Innovation Sandbox champion, as a starting point, this article interprets and explores AI security.

Figure 1. HiddenLayer Won the Most Innovative Startup 2023 at RSAC 2023 [1]

Background

The winner, HiddenLayer, reminds me of BigID in 2018, a very similar scenario. RSA Conference 2018 opened on April 15, 2018; one month later, on May 25, the General Data Protection Regulation (GDPR) would come into effect. With the big stick of data compliance about to fall, every organization had an urgent need for data security, and BigID, which focuses on helping customers meet data compliance requirements, became the most popular finalist. BigID lived up to expectations and took the crown.

This year, OpenAI’s ChatGPT and GPT-4, built on large language models, attracted attention from industries around the world, and expectations for artificial intelligence reached unprecedented heights. In some pre-contest votes, HiddenLayer, which focuses on AI attack and defense, reportedly received the most attention, so its championship is no surprise.

Table 1 Innovation Sandbox Champions over the Years

AI security will become a new arena

The security industry has always been known for its many niche segments: network security, endpoint security, application security, cloud computing security, IoT security, and so on. This is the first time the RSAC Innovation Sandbox contest has awarded the championship to the AI security category, confirming that AI security will become an independent arena. The emergence of this arena indicates that artificial intelligence will be widely applied and industrialized.

In the past, whenever a new technology emerged, cybersecurity professionals considered two issues: first, the inherent security of the technology itself; second, how to use the new technology to empower security.

Generally speaking, people tend to address the first issue, mitigating the new risks the technology brings, before using its new capabilities to help security companies improve their own efficacy. The reason is that direct, new business opportunities are more attractive than cost reduction and efficiency gains. For example, after cloud computing emerged, vendors first demonstrated how to apply access control, intrusion detection, and other mechanisms to the cloud environment, designing and implementing virtualization security and cloud-native security solutions, and only then considered using the agility and elasticity of cloud computing to reconstruct existing atomic security capabilities. Other technologies such as blockchain and SD-WAN are no exception.

Figure 2. AI History [2]

Artificial intelligence, however, is an exception. Since the 1960s, it has gone through three cycles of ups and downs, including boom periods and AI winters, but the era of general-purpose artificial intelligence has not arrived. Even Google’s AlphaGo defeating the world Go champion simply illustrates AI’s ability to outperform humans in specific areas; success in domains such as board games, natural language, and image recognition does not solve all of the problems facing humans. Logical reasoning, domain knowledge, and precise decision-making were not what AI was good at. From an application perspective, many companies made snap decisions to adopt AI after seeing its good results in image and text tasks. The daily work of algorithm engineers was to select models and tune parameters, seldom exploring why artificial intelligence actually works; if the results were poor, they simply gave up. As a result, AI was applied widely across many fields, but the actual effect was modest, and these AI models ended up delivering little value.

Of course, the attack and defense of artificial intelligence has always been an academic hotspot. For example, a 2013 study showed that an image classifier could be fooled by adversarial perturbations into labeling a picture of a panda as a gibbon [3]. NSFOCUS also funded an AI deception and defense project through the “CCF-NSFOCUS Kun-Peng Scientific Research Foundation” in 2022 and achieved good results.
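The panda-to-gibbon attack is typically carried out with gradient-based perturbations such as the fast gradient sign method (FGSM). As a hedged illustration (not any vendor’s implementation), here is a minimal FGSM sketch against a toy logistic-regression classifier; the weights, input, and epsilon are invented for demonstration:

```python
import numpy as np

# Toy linear classifier: label 1 if sigmoid(w.x + b) > 0.5.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

def fgsm(x, y, epsilon):
    # FGSM: step in the sign of the loss gradient w.r.t. the input.
    # For logistic loss with true label y, grad_x = (sigmoid(score) - y) * w.
    grad = (predict(x) - y) * w
    return x + epsilon * np.sign(grad)

x = np.array([2.0, 0.5, 0.0])   # correctly classified as label 1
x_adv = fgsm(x, y=1.0, epsilon=0.8)
print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → True False
```

A small, bounded perturbation is enough to flip the prediction, which is exactly why models need protection as assets in their own right.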

Nevertheless, AI attack and defense has mainly been applied by Internet giants, and commercialization as standalone products is still at an early stage. Hence HiddenLayer’s claim that no previous security company has focused on model security.

HiddenLayer’s victory will drive the industry of securing AI itself. As the AI industry takes off around large language models and general-purpose applications, a large number of start-ups will inevitably launch their own AI security products built on attack and defense techniques for existing AI models, and this niche segment will take shape.

AI security can also take advantage of the existing attack and defense foundation

Academically, AI attack and defense is mainly approached from the perspective of machine learning, using techniques such as adversarial samples and adversarial learning to deceive models or to improve their robustness and interpretability. These techniques have little to do with traditional attack and defense, and the corresponding AI security products are tightly coupled with, and embedded in, the model, so productization itself is very difficult. HiddenLayer, however, has unveiled a new approach: starting from basic security theory and attack and defense techniques, it explains high-end AI security in a form that existing security practitioners can understand.

For example, HiddenLayer defines models and training sets as a type of corporate asset. Since they are assets, they have vulnerabilities and need management and security protection, a framing whose necessity customers immediately appreciate. Second, it proposes MLDR (Machine Learning Detection and Response) as its protection framework, reusing the concepts of detection and response so that model protection becomes the counterpart of EDR, NDR, and MDR, and can even be placed within XDR to form full life-cycle protection of a company’s AI assets.
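The “models as assets” idea can be made concrete with a deliberately simple sketch (hypothetical, and not HiddenLayer’s MLDR): fingerprint each model artifact on disk so that later tampering, such as a model-injection attack on a serialized model file, becomes detectable:

```python
import hashlib
import pathlib
import tempfile

def fingerprint(path):
    """SHA-256 of a model artifact on disk."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def register(registry, path):
    # Record the artifact's known-good hash in the asset registry.
    registry[str(path)] = fingerprint(path)

def verify(registry, path):
    """True if the artifact still matches its registered hash."""
    return registry.get(str(path)) == fingerprint(path)

# Usage: register a model file, then detect tampering.
with tempfile.TemporaryDirectory() as d:
    model = pathlib.Path(d) / "model.onnx"    # hypothetical file name
    model.write_bytes(b"original weights")
    registry = {}
    register(registry, model)
    ok_before = verify(registry, model)       # True
    model.write_bytes(b"injected weights")    # simulate model injection
    ok_after = verify(registry, model)        # False
```

Real MLDR-style products go much further (runtime detection, behavioral analysis), but even inventory-plus-integrity treats the model with the same discipline as any other corporate asset.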

Beyond consistency of concept and framework, compatibility with traditional attack and defense at the level of specific tactics is the other thing that is not easy to achieve. AI attack and defense techniques include membership inference, data poisoning, model evasion, model injection, and so on. These differ from traditional security techniques in both principle and technology stack, so how can customers understand and accept them? Fortunately, MITRE’s ATLAS matrix for machine learning [4] enumerates many techniques and tactics used in AI attacks and describes each technique’s usage scenarios and countermeasures in a relatively standardized way. HiddenLayer maps its capabilities onto ATLAS to demonstrate its comprehensiveness, maturity, and expertise in AI attack and defense.
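To make one of these tactics concrete, here is a minimal sketch of a confidence-threshold membership inference attack (an illustrative toy with invented confidence values, not a production technique): an overfit model is typically more confident on its training samples than on unseen data, and an attacker can exploit that gap to guess whether a sample was in the training set:

```python
def infer_membership(confidences, threshold=0.9):
    # Guess "training-set member" whenever the model's top predicted
    # probability for a sample exceeds the threshold.
    return [c > threshold for c in confidences]

# Invented confidences from a hypothetical overfit classifier:
train_conf = [0.99, 0.97, 0.95]   # samples the model was trained on
unseen_conf = [0.60, 0.72, 0.55]  # samples it has never seen
members = infer_membership(train_conf)        # [True, True, True]
non_members = infer_membership(unseen_conf)   # [False, False, False]
```

Even this crude attack leaks whether specific records (e.g. patient data) were used for training, which is why ATLAS treats membership inference as a privacy threat with its own countermeasures.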

Figure 3. HiddenLayer’s product and services matrix [5]

From a CISO’s perspective, HiddenLayer can provide a relatively complete solution for enterprise AI asset inventory, risk discovery, and detection and response. Its protection concepts, architecture, and products are compatible with the established security system and can indeed address a series of risks, such as intellectual property protection and data protection, after an AI engine goes live.

Conclusion

With the upsurge of large language models, AI has gained industry recognition comparable to cloud computing, and general-purpose applications built on AI will be ubiquitous. If attackers want to make a profit, they will explore the weaknesses of AI technology in addition to traditional network attack vectors. Adversarial learning and adversarial samples will be among the weapons in an attacker’s arsenal. In the future, the models and training sets of artificial intelligence will become attacker targets, just like IT systems and cloud hosts.

HiddenLayer’s championship shows how urgent and worried traditional companies are about AI technology, and the success of GPT contributed greatly to its win. We hope HiddenLayer can ride the GPT wave and quickly replicate its success to broaden the AI security arena.

For security practitioners, beyond network attack and defense, AI technology and AI adversarial techniques may also become part of the foundational skill stack.

Many companies have been putting more effort into AI. For example, NSFOCUS Security Labs has been working on Trusted Artificial Intelligence (XAI) to improve the robustness and explainability of models against machine learning attacks. In 2022, its “XAI-based Rule Knowledge Extraction Engine” was recognized as an excellent practice case of trusted AI. We will keep researching for more innovative achievements. If you are interested in this field, you are welcome to join our open-source Trusted Artificial Intelligence project, XAIGen.

References:

[1] https://www.rsaconference.com/usa/programs/innovation-sandbox

[2] https://www.aminer.cn/ai-history

[3] https://www.leiphone.com/category/academic/W4Wm5jfL19ZWbIbp.html

[4] https://atlas.mitre.org/

[5] https://hiddenlayer.com/research/mitre-atlas-at-crossroads-of-cybersecurity-and-artificial-intelligence/