New Words at RSA: Machine Learning Abuse, XAI, Election Security, and CISA

May 13, 2019 | Mina Hao

The RSA Conference is the world’s largest and most prestigious IT security conference. Founded in 1991 as a small cryptography forum, it has grown into an event at which renowned security experts from around the world discuss the future development of cyberspace and global security vendors showcase their information security products.

The RSA Conference is a series of security conferences covering a wide range of topics. In recent years, artificial intelligence (AI) and machine learning (ML) have been hot topics at these conferences. At RSA 2019, Bugra Karabey, a senior risk manager at Microsoft, noted that AI and ML technologies are now applied ubiquitously in the cybersecurity field. ML is currently the most popular and most widely used AI technology. Meanwhile, people have begun to think about the drawbacks and even the security risks of ML.

Internet companies have long used the inexplainability of machine learning as a pretext for abusing users’ private data and for biased outcomes. Twitter CEO Jack Dorsey cited the concept of “explainable AI” when testifying before the Senate Intelligence Committee, indicating that Twitter is investing in research in this area, though still at an early stage. With the introduction of the General Data Protection Regulation (GDPR), abusing user privacy under the guise of the inexplainability of machine learning has become increasingly untenable. As user demands for “Internet justice” gather momentum, explainable AI has become a matter of great urgency and is likely to develop rapidly. Let’s wait and see.

AI Is “Hacking Into” Our Brain

Speaking at the RSA Conference 2019, Anthony J. Ferrante, an executive director at FTI Cybersecurity, pointed out that AI has already absorbed so much of our personal information that it can easily obtain all sorts of private data and “hack into” our brains: placing products, gaining access to our confidential information, and even seeking to influence public opinion, such as changing the way people vote.

In his speech, Anthony said that as AI becomes more intelligent every day, we need to think about it from four perspectives: security, privacy, law, and ethics. He also pointed out that hacking the human brain is not necessarily a bad thing. In the healthcare sector, for example, AI could help diagnose behavioral and emotional disorders, recognize brain changes caused by Alzheimer’s disease years before the first symptoms appear, identify potentially destructive behaviors, and treat depression and other psychological conditions.

Anthony indicated that to make proper use of AI and ML technologies, you need to ask yourself the following questions in advance:

  • How can we develop proper solutions to protect against malicious actors?
  • How will the data collected be used?
  • Could any information that is collected be used against users?
  • What are the broader implications of using this new technology?

Hacking with ML

At the RSA Conference 2019, Etienne Greeff and Wicus Ross from Secure Data pointed out that AI and ML are better suited to offensive applications than to defensive ones.

They then used two examples to demonstrate how ML and topic modeling can be used for intrusion and information theft.

In view of these risks, they offered the following suggestions:

  • Understand the new threat models that AI & ML may introduce.
  • Understand where data lives and how an attacker might see it.
  • Pay attention to ML and topic modeling among other attack vectors, as they can create new classes of attacks.
  • Have a response plan ready.

New Words for Cyberspace AI Security—Election Security

In October 2018, the Department of Homeland Security (DHS) established the Cybersecurity and Infrastructure Security Agency (CISA). CISA’s homepage features a notice on malicious cyber activities attributed to China. Election security, listed as the first item on the homepage, is of course a matter of paramount importance; it mainly refers to election manipulation on Facebook by means of AI. The incorporation of AI-related new words into the cyberspace security realm is a major takeaway from this RSA conference.

Explainable AI

Explainable AI (XAI) stems from the XAI program launched by the Defense Advanced Research Projects Agency (DARPA) to explore how to make autonomous systems better explain their own behaviors. DARPA determined that when autonomous systems flag suspicious activities to analysts, or when further checks are required, analysts need those systems to explain why a given behavior, such as forwarding a specific image, person, or piece of data to analysts, was performed. According to DARPA, the XAI program aims to “produce more explainable models, while maintaining a high level of learning performance; enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”. It can be seen that XAI arose because the Department of Defense (DoD) ran into unexplainable results during intelligence analysis. In most scenarios, such as user behavior analysis and advertising at Internet companies, ML only needs to deliver results. When it comes to security and intelligence analysis, however, knowing the results alone is not enough, because cyberspace security response and related processes require qualitative rather than merely quantitative analysis.

In intrusion data analysis, analysts must be able to interpret the results. Traditional machine learning algorithms built on training data, however, are usually “black boxes” that defy interpretation. For this reason, many scholars have begun to study explainable AI (XAI), with the goal of delivering explainable analysis results, whether through statistical modeling (for example, the topic models frequently mentioned at this year’s RSA conference) or through knowledge graph (semantic network) technology (such as IBM’s Watson). Today, AI technologies including ML are widely applied across the security domain. How to continuously deliver optimized solutions and resolve the security risks and issues that arise in their practical application will be a long-term research project.
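To make the contrast between a “black box” verdict and an explainable one concrete, here is a minimal, purely illustrative sketch (not from any talk at the conference): a linear alert-scoring model that, instead of emitting only a final score, also reports how much each feature contributed to the decision. All feature names, weights, and thresholds are hypothetical.

```python
# Hypothetical linear alert-scoring model: each feature of a login event
# contributes a weighted amount to the final anomaly score, and the
# per-feature contributions serve as the "explanation" of the verdict.
WEIGHTS = {
    "failed_logins": 0.6,       # repeated failures raise suspicion
    "off_hours": 0.9,           # activity outside business hours
    "new_country": 1.5,         # source country never seen for this user
    "privileged_account": 1.2,  # admin accounts carry higher risk
}

def score(event):
    """Return (total_score, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * event.get(f, 0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

def explain(event, threshold=2.0):
    """Return a verdict plus the ranked reasons behind it."""
    total, parts = score(event)
    verdict = "ALERT" if total >= threshold else "ok"
    # Sort features by how much each one contributed to the decision.
    ranked = sorted(parts.items(), key=lambda kv: -kv[1])
    reasons = [f"{name} (+{value:.1f})" for name, value in ranked if value > 0]
    return verdict, total, reasons

event = {"failed_logins": 3, "off_hours": 1, "new_country": 1}
verdict, total, reasons = explain(event)
print(verdict, round(total, 1), reasons)
# → ALERT 4.2 ['failed_logins (+1.8)', 'new_country (+1.5)', 'off_hours (+0.9)']
```

A black-box model would stop at the score 4.2; the explainable version also tells the analyst that repeated failed logins and a never-before-seen source country drove the alert, which is the kind of qualitative answer security response requires.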

Introduction to CISA

The United States Senate passed the Cybersecurity and Infrastructure Security Agency Act (H.R. 3359) in October 2018, and President Trump signed it into law the following month. The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for circulating threat intelligence and mounting national-level emergency responses; it houses the National Risk Management Center (NRMC), a national-level security operations center. At https://www.dhs.gov/cisa/information-sharing is a video named Months to Milliseconds, which depicts how the Department of Homeland Security reduced its response time from months to milliseconds after deploying the EINSTEIN program and related facilities. The video touts three things:

  1. The threat detection system (National Cybersecurity Protection System (NCPS)) involved in the EINSTEIN program, augmented with big data analysis, provides detection, analysis, intelligence-sharing, and prevention capabilities.
  2. The Continuous Diagnostics and Mitigation (CDM) program has powerful vulnerability management capabilities.
  3. The National Cybersecurity and Communications Integration Center (NCCIC), the national-level security operations center, delivers powerful capabilities.

The video tells the story of an ill-fated employee in the Program Management Office Government Administration Building. The employee receives an APT-style spear-phishing email. He clicks it, and nothing appears to happen. At night, his computer turns on by itself; finding that it runs slowly, he contacts the administrator, who discovers unauthorized access and reports it to NCCIC (which, at this stage, relies on manual investigation). NCPS (the latest version of the EINSTEIN system) then begins its analysis and shares intelligence with other departments, blocking hundreds of similar attacks. Further manual investigation finds that the hackers have changed tactics, turning to 0-day attacks for lateral movement within the network. Because the EINSTEIN system has no rules for these attacks, only manual investigation can reveal the adversary’s intention and targets. Eventually, the analysts find that newly installed software contains a 0-day vulnerability. In response, they add detection and blocking rules and a source IP reputation-scoring mechanism to the EINSTEIN system and raise the threat score of the attacking source IP address, so that systems automatically block its communications. CDM then sets to work, finding computers with the vulnerable software installed and identifying vulnerabilities before distributing patches. In the end, the hackers are frustrated, and one slams his fist onto his computer. The conclusion the video draws: raise the adversary’s cost of attack and identify threats within seconds to stay ahead of the adversary. At the close, the hackers are at work once again…
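The reputation-scoring step in the story can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the actual EINSTEIN mechanism: each sensor detection raises a source IP’s threat score, and once the score crosses a threshold, traffic from that IP is blocked automatically without further manual investigation. All class names, severities, and thresholds are illustrative assumptions.

```python
# Illustrative IP reputation table: detections accumulate into a threat
# score, and IPs whose score crosses the threshold are blocked automatically.
from collections import defaultdict

BLOCK_THRESHOLD = 10  # assumed cutoff for automatic blocking

class ReputationTable:
    def __init__(self):
        self.scores = defaultdict(int)

    def report_detection(self, ip, severity):
        """Raise an IP's threat score when a sensor reports a hit."""
        self.scores[ip] += severity

    def is_blocked(self, ip):
        """Traffic is dropped once the accumulated score crosses the cutoff."""
        return self.scores[ip] >= BLOCK_THRESHOLD

table = ReputationTable()
table.report_detection("203.0.113.7", severity=4)  # spear-phishing hit
table.report_detection("203.0.113.7", severity=7)  # 0-day exploit attempt
print(table.is_blocked("203.0.113.7"))   # → True: now auto-blocked
print(table.is_blocked("198.51.100.9"))  # → False: unknown IPs stay allowed
```

The design choice this sketch highlights is the one the video emphasizes: once a rule or score exists, blocking happens in milliseconds and at machine scale, while discovering the rule in the first place still takes human analysis.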

Incidentally, CISA emphasizes that it has no silver bullets (universal weapons) but relies on defense in depth, building a defense system from a combination of mature commercial tools.

From the above, we can conclude that the US moves very quickly in the field of critical infrastructure security.