Generative AI used in incident response

Generative AI is being explored in a wide range of fields, and cybersecurity is no exception. While it is expected to be used for "defense," there is concern that it may also be used for "attacks." This article introduces how generative AI can be used for "defense," especially in the "incident response" phase.

Cybersecurity and operational efficiency through generative AI

Generative AI is used in many areas, such as creating text, images, audio, and code. Many people already use ChatGPT, Microsoft Copilot, and other generative AI tools to produce business documents in minutes.

Although many people feel uneasy about the accuracy and precision of generative AI in business applications, it has clear strengths, such as summarizing and translating large amounts of data and text at great speed. Attention is therefore focusing on automating and streamlining business operations by leveraging these strengths. At the same time, risks such as attacks using generative AI and information leakage through incorrect use are often discussed. However, the Tokyo Metropolitan Government has published guidelines on text-generative AI (*1), and mechanisms and environments that encourage correct use are steadily being established.

Generative AI is also expected to improve efficiency in cybersecurity operations, across a wide range of tasks such as acquiring and applying threat intelligence and writing reports. In this article, we focus on the operational efficiency that generative AI can bring to cybersecurity operations.

Let's use Generative AI in Cybersecurity Incident Response Operations!

In this article, we introduce an example of using generative AI in cybersecurity incident response operations (hereinafter, "incident response operations"). Incident response requires deriving response policies and methods from a large amount of past data. This makes it an operation where generative AI, which excels at extracting information and generating ideas from large volumes of data, can readily be applied.

NIST's Cybersecurity Framework (NIST CSF, *2) divides security activities into five functions: Identify, Protect, Detect, Respond, and Recover. In many cases, products and services such as SIEM and EDR are used to identify, protect against, and detect cybersecurity threats, so much depends on the performance, design, and configuration of the products themselves, and many of these products already include AI functions. In contrast, the Respond and Recover phases require knowledge and human resources to analyze the detected information and derive appropriate response and recovery procedures. For operations that rely heavily on human response capability, generative AI can be expected to improve efficiency. We therefore applied generative AI to the Respond and Recover phases.

Figure 1: Study of AI Application in the Cybersecurity Framework (NIST CSF)

In the Respond and Recover phases, the detected incident information is analyzed and detailed procedures are worked out, mainly using the following information.

Public information: information publicly available on the Internet (vulnerability information, latest threats, etc.)

Internal information: information held within the organization (business details, employee information, etc.), incident response manuals, past incident response records, etc.

Because of the huge amount of data involved, processing this information manually takes considerable time. The actual time depends on the type of incident and the volume of internal information, but even for a dedicated team member it can take several days.

Figure 2: Typical incident response/recovery business image

We therefore had the generative AI reference and learn from this information, and tested whether it could support appropriate incident response and recovery.

Figure 3: Business image of incident response/recovery using generative AI

We used SaaS offerings such as ChatGPT and Copilot to collect public information. For internal information containing highly confidential data, on the other hand, we built a secure environment using Azure OpenAI and Vertex AI, which allow organizations to run models in their own environments, so that the necessary information could be searched and analyzed there. In this environment, we asked the generative AI questions arising in incident response work and tested how much useful information could actually be obtained in the response and recovery phases.
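As a rough sketch of how internal information can be combined with a privately hosted model, the snippet below retrieves past incident records by simple keyword matching and packs them into a single prompt string. This is our illustration, not the authors' actual implementation; the record contents and function names are invented, and a real system would send the resulting prompt to a model endpoint such as Azure OpenAI.

```python
# Minimal sketch (assumption, not the authors' system): retrieve internal
# incident records relevant to a query and build a prompt string that
# would be sent to a privately hosted model. All data is illustrative.

PAST_INCIDENTS = [
    {"id": "INC-001", "summary": "Phishing mail led to credential theft",
     "actions": "Reset passwords; blocked sender domain"},
    {"id": "INC-002", "summary": "Ransomware detected on a file server",
     "actions": "Isolated host; restored from backup"},
]

def find_relevant_records(query: str, records: list) -> list:
    """Naive keyword retrieval over past incident summaries."""
    terms = {w.lower() for w in query.split()}
    return [r for r in records
            if terms & set(r["summary"].lower().split())]

def build_prompt(query: str, records: list) -> str:
    """Pack the retrieved internal context into one prompt string."""
    context = "\n".join(
        f"- {r['id']}: {r['summary']} (actions: {r['actions']})"
        for r in records)
    return (f"Past incidents:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

hits = find_relevant_records("ransomware on server", PAST_INCIDENTS)
prompt = build_prompt("ransomware on server", hits)
```

Keeping retrieval and prompt construction inside the private environment, as sketched here, means confidential records never leave that environment; only the model deployment itself needs access to the assembled prompt.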

Prompt Engineering

At the beginning of the test, the quality of the answers varied: appropriate information was sometimes output correctly and sometimes not. The failure cases included both "giving wrong answers" and "failing to produce an answer and returning an error."

Figure 4: Examples of the problems we faced

We therefore carried out "prompt engineering" (*3). Prompt engineering is the practice of constructing detailed, carefully designed instructions for the generative AI, and it is one of the key techniques for improving output quality. Specifically, it specifies things such as the granularity of the output and the form the text should take. Many prompt engineering methods have been devised and studied; the following are examples of techniques that proved highly effective in this trial.

Figure 5: Examples of prompt engineering
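Since Figure 5 is not reproduced here, the sketch below illustrates commonly cited prompt-engineering techniques of the kind described above: assigning a role, specifying the output format, and adding a guardrail against guessing. The wording and function name are our own examples, not the authors' actual prompts.

```python
# Illustrative prompt template (our example, not the authors' exact prompt)
# combining common prompt-engineering techniques.

def make_prompt(incident: str) -> str:
    return "\n".join([
        "You are a senior cybersecurity incident responder.",   # role
        "Answer in numbered steps, at most five steps.",        # output format
        "If the provided information is insufficient, "
        "say so instead of guessing.",                          # guardrail
        f"Incident: {incident}",
    ])

prompt = make_prompt("Suspicious outbound traffic from an internal host")
```

Constraining the output format in this way also makes the answers easier to verify, which matters because, as noted later, responders still need to check the AI's output.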

Using these techniques, we can make the instructions to the generative AI more concrete when creating a prompt. Below is an example of a prompt whose purpose is: "I would like advice on how to handle the current incident based on past cases."

Figure 6: Example prompt

When we gave this prompt to the generative AI, it answered as follows:

Figure 7: A successful case study

In this way, prompt engineering enabled the generative AI to provide accurate, well-ordered, and organized responses. By giving specific instructions, we improved the quality of the responses and obtained the information that responders wanted.

Effects Obtained by Using Generative AI

We have shown that high-quality responses can be obtained even in incident response operations by having the generative AI learn the necessary information and by giving it appropriate instructions. With the generative AI assisting with response procedures, we can expect effects such as enabling inexperienced responders, who previously could not take the initiative, to deliver high-quality responses.

In terms of working hours, using the generative AI reduced operational hours by approximately 25% on average compared with the manual method (these figures are for reference only, as they depend on the experience and skills of the incident response personnel). This will enable security engineers to handle more incidents and, in the time freed up, to begin tasks such as improvement activities and training new employees, work that had been pushed aside for lack of time.

Further Potential of Generative AI in Incident Response Operations

While we have confirmed the effectiveness of using generative AI, we have not yet been able to eliminate concerns about the accuracy and flexibility of its responses. In this test, the output of the generative AI still had to be checked and verified by human workers. There is also the problem that answer quality degrades significantly when questions concern information that was not provided to the generative AI.

One possibility, though unproven, is to apply a method called "fine-tuning," in which the AI model itself is modified. Rather than using the existing models provided through generative AI SaaS or PaaS offerings as-is, if we can train models on our own interpretation of the information through fine-tuning, we may be able to obtain more accurate and more applicable answers.
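To make the fine-tuning idea concrete: fine-tuning typically requires a training file of input/output pairs. The sketch below builds one record in the chat-format JSONL convention used by OpenAI-style fine-tuning APIs. The incident content is invented for illustration, and the article's authors have not verified this approach.

```python
import json

# Hypothetical fine-tuning record pairing a past incident description with
# the response actually taken. The content is invented for illustration.
examples = [
    {"messages": [
        {"role": "system",
         "content": "You are an incident response assistant."},
        {"role": "user",
         "content": "Ransomware detected on a file server."},
        {"role": "assistant",
         "content": "1. Isolate the host. 2. Preserve logs. "
                    "3. Restore from a clean backup."},
    ]},
]

# Serialize to JSONL, one JSON object per line, the common input format
# for chat-model fine-tuning.
jsonl = "\n".join(json.dumps(e, ensure_ascii=False) for e in examples)
```

Building such a dataset from past incident records is itself a substantial curation task, which is one reason fine-tuning remains a possibility rather than something demonstrated in this trial.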

Beyond fine-tuning, we expect higher-quality AI and more diverse approaches as the technology evolves day by day. Currently, generative AI is firmly positioned as a support tool for human workers, but the time may come when it fully automates these operations or replaces humans in them. We will continue to support cybersecurity operations with cutting-edge technology in line with the needs of the times.

Asahi Shimizu


NTT DATA Japan Corporation

He is responsible for SOC/CSIRT operation support, security product proof of concept, and various verifications.

Shigeaki Kurimoto


NTT DATA Japan Corporation

He is responsible for global SOC/CSIRT operations and internal impropriety countermeasures (UEBA operations, etc.).