Countdown to GovWare 2023 – The Application of Artificial Intelligence (AI) in Cybersecurity

October 12, 2023 | NSFOCUS

The stage is set, and the countdown has begun. GovWare 2023, a pivotal event in cybersecurity, is just around the corner. A review of the event agenda shows that many of the speeches, keynotes, and panels will center on the application of Artificial Intelligence (AI) in cybersecurity. As we eagerly await GovWare 2023, let’s take a closer look at how AI is making its mark in the realm of cybersecurity. Visit our booth (#C06) at GovWare 2023 to engage in deeper discussions with our security experts.

The surge in enterprise digital transformation has brought artificial intelligence (AI) technology widespread attention and importance. AI has made significant strides, finding broad application in image recognition, natural language processing, and other domains. As cyberspace increasingly becomes the cornerstone of digital economic development, the attack surface in the digital realm continues to expand and evolve, widening the information gap between defenders and attackers. As the battle between cyber attackers and defenders intensifies, the use of AI technology has become an inevitable trend in the development of network security.

In recent years, a growing number of enterprises and organizations have deepened the integration of AI technology into their network security solutions. AI has become one of the most direct and critical components for implementing security measures, enhancing the effectiveness of network security defense, and countering advanced persistent threats (APTs) and other sophisticated threats. In an increasingly intense and protracted cybersecurity environment, however, many challenges remain in leveraging AI to fortify defenses. ChatGPT has garnered significant attention for its extensive use of advanced AI technology, yet in network security, AI has long been perceived as a “double-edged sword”: security providers can use AI-powered tools and products to handle a multitude of attack incidents automatically and intelligently, while threat actors can harness the same technology to develop intelligent malware and launch covert attacks.

This post will explore the application of AI in cybersecurity and the inherent risks of large language models (LLMs) in cybersecurity.

The Application of AI in Cyber Attacks

The development of AI technology has prompted attackers to incorporate machine learning, deep learning, and other methods to enhance the automation, intelligence, and weaponization of network attacks. This has resulted in nearly a 50% increase in both the volume and complexity of attacks, making it more challenging to detect them. AI technology contributes to network attacks in three stages: attack preparation, attack execution, and post-attack activities.

Attack Preparation

Attack preparation aims to gather sensitive information and vulnerabilities related to the attack target, providing the foundation and reference for designing attack strategies. During this phase, attackers may employ machine learning and deep learning technologies for several tasks:

Password cracking: Passwords are a crucial defense against unauthorized access to target networks. To crack passwords, attackers often use AI tools such as PassGAN and GENPass, which are built on generative adversarial network (GAN) frameworks and neural networks like Long Short-Term Memory (LSTM) and Residual Networks (ResNet). These models automatically learn real-world password distributions, generate a wide variety of candidate passwords, and can crack passwords of fewer than six characters within seconds.
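
The core idea (learn the distribution of leaked passwords, then sample candidate guesses from it) can be illustrated without a GAN. The toy sketch below substitutes a character-level Markov chain for PassGAN's adversarial training, and the five "leaked" passwords are made-up stand-ins for a real breach corpus.

```python
import random
from collections import defaultdict

# Toy sketch of learning a password distribution. PassGAN/GENPass use
# GANs and LSTMs; a character-level Markov chain stands in here only to
# make the "learn, then sample" idea concrete.
leaked = ["password1", "letmein", "dragon22", "sunshine", "passw0rd"]

counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    chars = "^" + pw + "$"                 # start/end markers
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1                  # count character transitions

def sample():
    out, cur = "", "^"
    while True:
        nxt = random.choices(list(counts[cur]),
                             weights=counts[cur].values())[0]
        if nxt == "$" or len(out) > 16:
            return out
        out += nxt
        cur = nxt

print([sample() for _ in range(5)])        # candidate guesses
```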

Text CAPTCHA cracking: To counter automated abuse such as Distributed Denial of Service (DDoS) attacks, many companies use text CAPTCHAs to differentiate between automated programs and humans on their web pages. As CAPTCHAs have become more complex and distorted, deep learning has improved the versatility and effectiveness of CAPTCHA cracking. For instance, researchers have used text CAPTCHAs generated by GANs as training data and fine-tuned text recognition models through transfer learning; this approach has cracked 33 different text CAPTCHA schemes.
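
As a rough illustration of the transfer-learning step, the sketch below freezes a pretrained ResNet-18 backbone and retrains only a new classification head for single CAPTCHA characters. The 36-class setup and the random stand-in batch are assumptions for illustration; the cited work trains on GAN-synthesized CAPTCHAs and fine-tunes on real ones.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: transfer learning for CAPTCHA character recognition.
# Freeze a pretrained backbone; train a new 36-class head (a-z, 0-9).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # keep pretrained features
model.fc = nn.Linear(model.fc.in_features, 36)   # new trainable classifier

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)             # stand-in character crops
labels = torch.randint(0, 36, (8,))              # stand-in character labels
loss = loss_fn(model(images), labels)            # one fine-tuning step
loss.backward()
opt.step()
print("fine-tuning step done, loss =", loss.item())
```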

Phishing attacks and spam generation: AI technologies such as Markov models, Long Short-Term Memory (LSTM) networks, Autoencoders (AEs), Generative Adversarial Networks (GANs), and large language models (LLMs) can effectively generate emails. For Transformer-based generative models like BERT and GPT, producing phishing and spam emails is cheap: given a few key trigger words, these models can generate large numbers of convincing, relevant spear-phishing emails and spam, serving as a launching pad for phishing and denial-of-service attacks.

Automatic network asset discovery: Network asset discovery aims to capture information about a target network’s devices and applications, including operating systems, ports, IP liveness, and traffic characteristics. AI algorithms such as Naive Bayes classifiers, Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks have been applied to network asset fingerprinting and signature matching. Trained and optimized on fingerprint data such as network flows and web fingerprints, AI models can achieve both passive and active device recognition.
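
A minimal sketch of classifier-based fingerprinting follows: a random forest trained on flow-level features to recognize a device class. The feature set (initial TTL, TCP window size, timing) and the synthetic data are illustrative assumptions, not a production fingerprint schema.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative sketch: classify device type from flow-level features.
rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.choice([64, 128, 255], n),          # initial TTL
    rng.integers(500, 65535, n),            # TCP window size
    rng.random(n),                          # normalized inter-arrival time
])
y = (X[:, 0] == 255).astype(int)            # toy label: "router-like" device

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
```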

Vulnerability discovery: Vulnerability discovery seeks to identify and analyze security flaws in software and system lifecycles. AI technology uses knowledge and signatures extracted from vulnerability databases to mine and predict vulnerabilities automatically. Machine learning algorithms like Support Vector Machines, Logistic Regression, Decision Trees, Random Forests, as well as deep learning models like Convolutional Neural Networks, Recurrent Neural Networks, Long Short-Term Memory Networks, Graph Neural Networks, and Generative Adversarial Networks, have been widely researched in the field of vulnerability discovery. Based on software metrics, code attributes, and textual syntax and semantics, these algorithms can automatically uncover vulnerabilities in targets such as smart contracts, IoT, browsers, and binary programs. Combining AI vulnerability discovery with traditional program analysis techniques can further enhance the performance and effectiveness of vulnerability discovery.
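
At its simplest, ML-based vulnerability discovery treats code as text and learns patterns that correlate with flaws. The sketch below is a toy version of that idea (TF-IDF character n-grams plus logistic regression over four hand-labeled C snippets); the research cited above uses far richer program representations such as ASTs, graphs, and slices.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Minimal sketch: learn a vulnerable/safe classifier from token n-grams.
# The snippets and labels are toy assumptions for illustration.
snippets = [
    "strcpy(buf, user_input);",                       # overflow pattern
    "gets(line);",
    "strncpy(buf, user_input, sizeof(buf) - 1);",     # bounded copy
    "fgets(line, sizeof(line), stdin);",
]
labels = [1, 1, 0, 0]                                 # 1 = vulnerable

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(snippets, labels)
print(model.predict(["strcpy(dst, argv[1]);"]))       # expect: vulnerable
```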

Attack Execution

After obtaining critical information about the target and discovering effective vulnerabilities, attackers proceed with exploitation, intrusion, and penetration. AI empowers the execution of attacks in the following ways:

Malware generation: AI technology aids in evading detection, adapting autonomously to the execution environment, and generating malware variants. The Swizzor malware family, for example, comprises millions of binary samples; such rapid production of variants reflects the capability of machine learning algorithms to create malicious software variants automatically. These variants retain the characteristics of previous versions while increasing the stealthiness of attacks, significantly facilitating the spread of malware.

Exploiting vulnerabilities: Public security vulnerability reports, system vulnerability patches, and previously discovered vulnerability information serve as data sources for vulnerability exploitation. Natural language processing (NLP) can extract signatures from these sources and automate the generation of exploitation programs, guiding subsequent vulnerability localization and attack path searching. For example, SemFuzz uses NLP to extract signatures from CVE reports and vulnerability text in Linux Git logs to automatically generate proof-of-concept (PoC) exploits, and has also discovered undisclosed vulnerabilities in the process.
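
The extraction step can be pictured as pulling structured hints out of free-text reports. The sketch below uses a few regular expressions over a made-up CVE-style description to recover function, file, and syscall names; SemFuzz's actual NLP pipeline is considerably more sophisticated, so treat this as an assumption-laden illustration of the idea only.

```python
import re

# Hedged sketch of signature extraction from a vulnerability report:
# pull out hints (affected function, source file, triggering syscall)
# that could seed later analysis. Report text and patterns are invented.
report = (
    "CVE-2016-XXXX: A use-after-free in the snd_timer_user_read() function "
    "in sound/core/timer.c allows local users to crash the kernel via the "
    "ioctl system call."
)

functions = re.findall(r"\b(\w+)\(\)", report)          # affected functions
files = re.findall(r"\b[\w/]+\.c\b", report)            # source files
syscalls = re.findall(r"\b(\w+) system call", report)   # triggering syscalls
print(functions, files, syscalls)
```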

Automatic penetration testing tools: Automated penetration testing tools can complete the entire penetration testing process, including vulnerability information gathering, vulnerability scanning, vulnerability analysis, vulnerability exploitation, post-penetration attacks, and report generation, with a single click. For example, the DeepExploit framework improves the efficiency of penetration testing using the A3C reinforcement learning algorithm built on Metasploit. The Shennina framework uses AI to achieve fully automatic host penetration, with AI responsible for identifying available penetration methods, providing vulnerability exploitation techniques, and executing penetration testing tasks.
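
To make the reinforcement-learning angle concrete, the toy sketch below runs tabular Q-learning over a made-up three-state attack chain. DeepExploit itself uses A3C on top of Metasploit; plain Q-learning and the invented states and actions are substituted here purely for brevity.

```python
import random
from collections import defaultdict

# Toy sketch of RL-driven penetration testing over a fictional attack chain.
STATES = ["recon", "foothold", "root"]
ACTIONS = ["scan", "exploit_web", "priv_esc"]
# Hypothetical environment: which action advances which state.
STEP = {("recon", "scan"): "foothold",
        ("foothold", "priv_esc"): "root"}

Q = defaultdict(float)
for _ in range(2000):                       # training episodes
    s = "recon"
    while s != "root":
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda a: Q[(s, a)])       # epsilon-greedy
        s2 = STEP.get((s, a), s)                         # env transition
        r = 10.0 if s2 == "root" else -1.0               # reward shaping
        Q[(s, a)] += 0.1 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS)
                            - Q[(s, a)])                 # Q-learning update
        s = s2

print({k: round(v, 1) for k, v in Q.items() if v > 0})   # learned chain
```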

Post-Attack Activities

One of the crucial goals of post-attack activities is to obscure and hide the existence of attack behaviors and intentions, enhancing the stealthiness of attacks and reducing the risk of detection.

Traffic emulation: Malicious traffic generated during the attack can expose attack information when detected and analyzed. Therefore, the adaptive emulation of normal traffic is a crucial method for concealing network attacks. Existing artificial intelligence traffic emulation solutions are mostly based on Generative Adversarial Networks and their variants like Wasserstein GAN (WGAN) and WGAN-GP. GANs, by learning the feature distribution and behavior of normal traffic, significantly reduce the effectiveness of existing intrusion detection systems, greatly enhancing the stealthiness of malicious attacks.
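
A minimal GAN sketch on synthetic flow-feature vectors is shown below: the generator learns to produce "flows" the discriminator cannot distinguish from normal traffic. The four features and the Gaussian "normal traffic" are assumptions for illustration; published work applies WGAN variants to real flow data.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch on 4-dimensional flow features (e.g. duration,
# bytes, packets, mean inter-arrival time). "Normal traffic" is synthetic.
real_traffic = torch.randn(256, 4) * 0.5 + torch.tensor([1.0, 2.0, 0.5, 0.1])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
D = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    fake = G(torch.randn(256, 8))
    # Discriminator: tell real flows from generated ones.
    d_loss = bce(D(real_traffic), torch.ones(256, 1)) + \
             bce(D(fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make generated flows look "normal" to the discriminator.
    g_loss = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated flow sample:", G(torch.randn(1, 8)).detach())
```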

Attack intention obfuscation and concealment: Hiding attack intentions is a significant objective for malicious software. Utilizing machine learning and deep learning techniques can effectively enhance the survival and evasion effectiveness of malicious code. Furthermore, static executable files generated by GANs and reinforcement learning frameworks can bypass malicious software detection mechanisms.

The Application of AI in Cyber Defense

Because attackers leverage advanced technologies such as big data analysis and automation tools to enhance the efficiency and stealthiness of malicious attacks, network security defense increasingly needs to move beyond traditional approaches. Researchers are applying AI technology, directly or indirectly, to improve the efficiency of network security defense, enabling rapid threat detection during practical attack and defense exercises and elevating the automation and intelligence of defense. Currently, the application of AI technology in network security defense encompasses the following aspects:

Attack Detection

Network Intrusion Detection: To enhance the performance of network intrusion detection, researchers are increasingly applying deep learning networks, such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), as well as adversarial neural networks and reinforcement learning, to network intrusion detection. Although these technologies have shown initial results, effectively implementing AI-based network intrusion detection in large dynamic systems remains challenging.
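
As a small illustration, the sketch below trains a 1-D CNN to separate synthetic "attack" flows (uniform bursts of small packets) from random benign flows using per-flow packet-length sequences. The data generator is an assumption; real IDS models train on labeled corpora such as NSL-KDD or CICIDS.

```python
import torch
import torch.nn as nn

# Sketch: 1-D CNN intrusion detector over per-flow packet-length sequences.
def make_flow(attack):
    if attack:  # beacon-like bursts of uniformly small packets
        return (torch.full((1, 64), 40.0) + torch.randn(1, 64)) / 1500
    return torch.rand(1, 64)  # benign: arbitrary normalized packet sizes

X = torch.stack([make_flow(i % 2 == 0) for i in range(256)])
y = torch.tensor([float(i % 2 == 0) for i in range(256)]).unsqueeze(1)

net = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(8, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward(); opt.step()
print("train accuracy:", ((net(X) > 0).float() == y).float().mean().item())
```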

Spam Detection: AI-based spam detection solutions have been in development for years. Google reports that its AI-based Gmail spam recognition achieves a detection rate as high as 99.9%. Given the enormous volume of spam, however, the remaining 0.1% still poses problems for users, so continuously improving spam detection with new AI technology remains a worthwhile subject of in-depth research.
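
The classic baseline for this task is a bag-of-words Naive Bayes filter, sketched below with four toy messages; production systems like Gmail's use vastly larger corpora and deep models.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Classic sketch of AI-based spam detection: bag-of-words + Naive Bayes.
# The four training messages are toy assumptions.
messages = [
    "WINNER! Claim your free prize now", "Cheap meds, limited offer",
    "Meeting moved to 3pm, see agenda", "Quarterly report attached",
]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)
print(clf.predict(["You have won a free prize, claim now"]))
```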

Malware Identification: Most traditional machine-learning-based malware detection methods rely on supervised learning and are susceptible to evasion by attackers. Researchers have proposed the DQEAF framework, which uses reinforcement learning to train AI agents through continuous interaction with malware samples; the trained agents can bypass anti-malware engines, highlighting the weaknesses of supervised-learning-based malware detection models.

Encrypted Traffic Detection: With the widespread use of TLS encryption on the internet, detecting malicious traffic within massive volumes of encrypted traffic without decryption is a major concern in both academia and industry. From a technology trend perspective, next-generation detection relies on statistical characteristics derived from big data: computing statistics over large amounts of traffic and extracting signatures that describe the nature of sample distributions. AI, which excels at analyzing statistical patterns, is well suited to this task. However, AI-based traffic recognition is still in its early stages and not yet suitable for final determinations; it is better suited to data processing and decision support, and AI-based encrypted traffic detection will remain a long-term research topic.
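
The statistical approach can be sketched as: derive distribution features from packet timing and sizes per flow, then classify without touching payloads. In the toy example below, "malicious" flows are assumed to show beacon-like regular timing; that assumption, the synthetic flows, and the feature set are all illustrative rather than general.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Sketch: detect on encrypted traffic without decryption by classifying
# statistical features of packet timing and sizes.
rng = np.random.default_rng(1)

def features(times, sizes):
    iat = np.diff(times)                              # inter-arrival times
    return [iat.mean(), iat.std(), np.mean(sizes), np.std(sizes)]

flows, labels = [], []
for _ in range(300):
    benign = rng.random() < 0.5
    if benign:
        times = np.cumsum(rng.exponential(0.5, 50))   # bursty human traffic
    else:                                             # beacon-like C2 timing
        times = np.cumsum(np.full(50, 5.0) + rng.normal(0, 0.01, 50))
    sizes = rng.integers(60, 1500, 50)
    flows.append(features(times, sizes))
    labels.append(0 if benign else 1)

clf = GradientBoostingClassifier().fit(flows, labels)
print("training accuracy:", clf.score(flows, labels))
```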

Attack Mitigation

Vulnerability Patching: Machine learning and deep learning algorithms can automate vulnerability patching to address system flaws promptly. GenProg uses genetic programming to patch program source code, while DeepRepair generates repair solutions based on deep learning code similarities. Because vulnerabilities are varied and difficult to localize, AI-assisted patching still requires human involvement and investigation; reliable, fully automated vulnerability patching will require further research investment.

Attack Prediction and Interception: By learning the latent signatures of known vulnerabilities, AI can predict unknown threats such as zero-day vulnerabilities. Many security vendors currently use deep learning to defend against phishing attacks and domain name system vulnerabilities. Compared with traditional defense technology, AI significantly increases the interception rate of advanced threats and zero-day attacks.

Security Operations

Leveraging AI in cybersecurity facilitates building intelligent, automated network security operation systems. AISecOps (AI + Security + Operations, i.e., intelligence-driven security operations) fuses security operations with AI technology, providing features such as automated analysis of abnormal behavior, adaptive generation of defense strategies, alert assessment, and attack analysis.

In essence, AISecOps is guided by security operation objectives and built on the integration of people, processes, technology, and data. It focuses on the critical aspects of network security risk control and attack-defense, including prevention, detection, response, prediction, and recovery, and it constructs highly automated, trustworthy security intelligence models to assist, or even replace, humans in delivering security operation services. As the level of automation in AISecOps increases, algorithms, models, systems, and processes that support intelligent human-machine collaboration will be needed to keep pace with advanced automated security operation scenarios.

The Application of LLMs in Cybersecurity

With the surge in popularity of ChatGPT, large language models (LLMs) have also garnered significant attention in network security. People are concerned about the impact of AI-generated content tools, especially their implications for network security. Questions arise as to whether these tools can be used for network attacks or protection, and whether they introduce new security vulnerabilities.

Applications of LLMs in Attacks and Defense

Based on existing research and practical cases, LLMs, as powerful AI technology, have applications in both offensive and defensive roles in traditional network security:

  • LLMs Applied in Network Attacks: The emergence of LLMs has lowered the bar for launching various network attacks, such as phishing email generation, credential stuffing attacks, malicious code generation, vulnerability exploitation, and penetration attacks. Even attackers with limited experience in cyber warfare can easily initiate these attacks by conversing with LLMs. LLMs are also used to create fake news, disseminate misinformation, and manipulate public opinion.
  • LLMs Applied in Network Defense: Defenders can leverage LLMs to optimize tasks such as analyzing attack traffic and reverse engineering malicious code. LLMs can also be employed for automated response orchestration, rule creation, and assistance in security incident handling. Through transfer learning, a single LLM can be fine-tuned to detect multiple types of network attacks and integrated with traditional security devices, reshaping traditional security product offerings.
  • LLMs Applied in Network Security Operations: LLMs can assist in building network security operations systems. They can perform sub-functions of a security operations system, such as constructing a knowledge base for security operations, serving as AI customer support or command-center assistants, guiding the development of security compliance systems, and automatically iterating regulatory documents. Microsoft has introduced Security Copilot, based on ChatGPT and combined with its security model library covering trillions of network security threat signals, to assist in security operations; it offers automated generative AI services for network security, malware protection, and privacy compliance monitoring to businesses and individual users (a minimal sketch of such assistance follows this list).
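
As a hedged, minimal example of LLM-assisted operations, the sketch below uses a small local summarization model via the Hugging Face transformers library to condense a fabricated incident log for an analyst. It stands in for a product-grade assistant like Security Copilot; the model choice and the log contents are assumptions.

```python
from transformers import pipeline

# Sketch: summarize a noisy incident log for an analyst. A small local
# summarization model stands in for a full security-operations LLM.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

incident_log = (
    "09:14 IDS alert: outbound connection from host FIN-WS-042 to a known "
    "C2 domain. 09:15 EDR: powershell.exe spawned by winword.exe on the "
    "same host. 09:17 Proxy: 40 MB uploaded to the C2 domain. 09:20 AD: "
    "failed logons from FIN-WS-042 against three servers."
)
print(summarizer(incident_log, max_length=60, min_length=20)[0]["summary_text"])
```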

Risks of LLMs and Defense Measures

While LLMs are applied in network attack and defense, they also introduce new security risks. Whether used as an offensive weapon or a defensive tool, the introduction of LLMs presents additional risks due to their inherent vulnerabilities and the security threats they face:

  • Risk Due to Lack of Explainability: Current research has yet to provide clear explanations for the internal structure, neurons, and parameters of deep learning models. The lack of transparency and explainability makes it difficult to define the basis for AI decision-making in network attack and defense, and it complicates the task of addressing AI security threats effectively.
  • Risk of Inherent AI Vulnerabilities: The inherent vulnerabilities of AI make it susceptible to various attacks, including but not limited to adversarial examples, backdoor attacks, adversarial reprogramming, and image scaling attacks. These attacks may degrade model performance, influence model decisions, and, in severe cases, cause AI systems to malfunction.
  • Increased Risk of Privacy Leakage: In addition to the risks of data abuse and illegal data collection, AI algorithms face new privacy theft risks, including membership inference attacks, model inversion, and model stealing. These risks increase the likelihood of data and model information being stolen.
  • Vulnerabilities in Algorithm Frameworks and Open Source Libraries: Common open-source deep learning frameworks and their third-party software development kits (SDKs) contain various vulnerabilities. Even mainstream frameworks like TensorFlow have been found to have vulnerabilities in interfaces, learning algorithms, compilation, deployment, and installation. Exploiting these vulnerabilities can lead to threats like escape attacks, denial of service attacks, heap overflows, and more.

To mitigate the above risks, several defensive measures can be taken to enhance the security of AI systems:

  • Input and Output Filtering: Implement filters to block certain types of input or output, such as harmful, offensive, or inappropriate content. Detect anomalies and outliers in input data and discard or transform them so they do not degrade model performance (a minimal sketch of input filtering follows this list).
  • Transparency and Explainability: Provide a clear understanding of the model’s workings and an auditable decision-making process, enabling users to comprehend and question the model’s outputs.
  • Model Security Testing and Defense: To counter risks like poisoning attacks and backdoor attacks, set up detection mechanisms based on feature compression and local intrinsic dimensions. Additionally, employ strategies like adversarial training and data augmentation to safeguard model and algorithm security.
  • Detect and Remediate Framework Vulnerabilities: Scan potential security vulnerabilities in frameworks and third-party libraries using open-source vulnerability information and techniques like fuzz testing and deep learning.
  • Model Intellectual Property Maintenance: Restrict model access to some extent to alleviate model theft attacks. Model watermarking, fingerprinting, and signature technologies can help identify and verify suspicious models to protect model intellectual property.
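
As a minimal sketch of the input-filtering idea from the first item above, the example below fits an IsolationForest to "normal" input vectors and rejects an out-of-profile candidate before it reaches the guarded model. The synthetic features stand in for whatever representation the real model consumes.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal sketch of input filtering: flag anomalous inputs before they
# reach the model. Feature vectors here are synthetic stand-ins.
rng = np.random.default_rng(2)
normal_inputs = rng.normal(0, 1, (1000, 8))        # expected input profile
detector = IsolationForest(random_state=0).fit(normal_inputs)

candidate = rng.normal(6, 1, (1, 8))               # far outside the profile
if detector.predict(candidate)[0] == -1:
    print("input rejected as anomalous")           # drop or transform it
else:
    print("input passed to the model")
```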

LLM technology should be used under continuous monitoring and adjustment to identify and rectify issues promptly. Additionally, implementing these defensive measures should consider ethical, legal, and societal factors to ensure the reliable and sustainable use of new AI technology.

Conclusion

With the integration of AI into cybersecurity, the landscape is undergoing a comprehensive and profound transformation. Threat actors continue to scale up, organize, automate, and weaponize their attack techniques while making them more intelligent. How to fight security battles against attackers using AI technology has become a central question in cyberspace, and large language model technology will further drive the transformation of network security offense and defense.

Although the application of large language model technology in cybersecurity is not yet mature and faces numerous challenges, those who first find the best synergy between this technology and network security offense and defense will seize the initiative in the cybersecurity game. Promoting the continued practical application of AI in cybersecurity and enhancing network defense capabilities are therefore of great importance to the maturation of intelligent cybersecurity.

References:

[1] NSFOCUS, Intelligent Foundation: Initiating a New Era of Security Analysis, 2022.

[2] Hitaj B, Gasti P, Ateniese G, et al. Passgan: A deep learning approach for password guessing[C]//Applied Cryptography and Network Security: 17th International Conference, ACNS 2019, Bogota, Colombia, June 5–7, 2019, Proceedings 17. Springer International Publishing, 2019: 217-237.

[3] Liu Y, Xia Z, Yi P, et al. GENPass: A general deep learning model for password guessing with PCFG rules and adversarial generation[C]//2018 IEEE International Conference on Communications (ICC). IEEE, 2018: 1-6.

[4] Ye G, Tang Z, Fang D, et al. Yet another text captcha solver: A generative adversarial network based approach[C]//Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 2018: 332-348.

[5] Kubovič O, Košinár P, Jánošík J. Can artificial intelligence power future malware[J]. Research Desk, 2020.

[6] You W, Zong P, Chen K, et al. Semfuzz: Semantics-based automatic generation of proof-of-concept exploits[C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 2139-2154.

[7] Rigaki M, Garcia S. Bringing a GAN to a knife-fight: Adapting malware communication to avoid detection[C]//2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018: 70-75.

[8] Lin Z, Shi Y, Xue Z. Idsgan: Generative adversarial networks for attack generation against intrusion detection[C]//Advances in Knowledge Discovery and Data Mining: 26th Pacific-Asia Conference, PAKDD 2022, Chengdu, China, May 16–19, 2022, Proceedings, Part III. Cham: Springer International Publishing, 2022: 79-91.

[9] Lee W H, Noh B N, Kim Y S, et al. Generation of network traffic using wgan-gp and a dft filter for resolving data imbalance[C]//Internet and Distributed Computing Systems: 12th International Conference, IDCS 2019, Naples, Italy, October 10–12, 2019, Proceedings 12. Springer International Publishing, 2019: 306-317.

[10] W. Hu and Y. Tan. Generating adversarial malware examples for black-box attacks based on GAN. arXiv preprint arXiv:1702.05983, 2017.

[11] Anderson H S, Kharkar A, Filar B, et al. Learning to evade static PE machine learning malware models via reinforcement learning[J]. arXiv preprint arXiv:1801.08917, 2018.

[12] Nguyen M T, Kim K. Genetic convolutional neural network for intrusion detection systems[J]. Future Generation Computer Systems, 2020, 113: 418-427.

[13] Tariq S, et al. CAN-ADF: The Controller Area Network Attack Detection Framework[J]. Computers & Security, 2020: 101857.

[14] Freitas P, et al. Intrusion Detection for Cyber-Physical Systems Using Generative Adversarial Networks in Fog Environment[J]. IEEE Internet of Things Journal, 2020.

[15] Lopez-Martin M, Carro B, Sanchez-Esguevillas A. Application of deep reinforcement learning to intrusion detection for supervised problems[J]. Expert Systems with Applications, 2020, 141: 112963.

[16] Tang X, Qian T, You Z. Generating behavior features for cold-start spam review detection with adversarial learning[J]. Information Sciences, 2020, 526: 274-288.

[17] Fang Z, et al. Evading Anti-Malware Engines With Deep Reinforcement Learning[J]. IEEE Access, 2019, 7: 48867-48879.

[18] NSFOCUS, Thoughts on Encrypted Traffic Detection in the Era of Encrypt Everything, 2022

[19] Le Goues C, Nguyen T V, Forrest S, et al. GenProg: A generic method for automatic software repair[J]. IEEE Transactions on Software Engineering, 2011, 38(1): 54-72.

[20] White M, Tufano M, Martinez M, et al. Sorting and transforming program repair ingredients via deep learning code similarities[J]. arXiv preprint arXiv:1707.04742, 2017.

[21] https://www.chinaz.com/2022/1026/1461084.shtml

[22] Zhang Ruizi, et al. Research on AISecOps Automation Classification and Technology Trends[J]. Inforsec Security, 2020(9): 5.

[23] Liu C. Convolution neural network for relation extraction[C]// Proc of the International Conference on Advanced Data Mining and Applications. Berlin: Springer, 2013: 231-242.

[24] Zhang D, Wang D. Relation classification via recurrent neural network[J]. The SAO/NASA Astrophysics Data System, 2015, 15: 1-10.

[25] Satyapanich T, Ferraro F, Finin T. CASIE: Extracting cybersecurity event information from text[C]// Proc of the AAAI Conference on Artificial Intelligence. New York: AAAI, 2020: 879-886.

[26] Jia L, Zhong H, Wang X, et al. An empirical study on bugs inside tensorflow[C]//Database Systems for Advanced Applications: 25th International Conference, DASFAA 2020, Jeju, South Korea, September 24–27, 2020, Proceedings, Part I 25. Springer International Publishing, 2020: 604-620.