ChatGPT – The new tool of cybercriminals

The danger posed by the misuse of AI software is growing



A guest post by Frank Thias


The current superstar of the AI industry is called ChatGPT. The AI tool already generates texts automatically for millions of people. But cybercriminals have also used it to scan code for vulnerabilities and to create working exploits. It is high time for those responsible for security to address the dangers and the protective measures.

Those responsible for security must now prepare for the imminent danger of misusing AI tools and use them for their own protection.

The chatbot ChatGPT (Chat Generative Pre-trained Transformer) was only released by OpenAI in November 2022. It is primarily intended to imitate a human conversation partner, but it can also compose music, write essays, answer exam questions, play games, or write computer programs.

In doing so, it exhibits the usual weaknesses of AI tools: the results are reproductions of existing content, albeit often modified ones. They contain neither creativity nor personal opinion. In addition, the results are often simply wrong; the tool asserts complete nonsense with absolute certainty or presents its own inventions as facts.



AI is improving rapidly

OpenAI is determined to fix these teething problems. The company recently announced GPT-4, the successor version of the underlying language model. In addition to technical improvements, media reports say OpenAI wants to hire 1,000 additional contract workers to train the AI, and 400 software developers are said to assist with programming code. Mind you, these people do not write code themselves; they teach the bot how to write it. This is also expected to improve ChatGPT's programming capabilities significantly.

OpenAI strives to prevent misuse through appropriate security precautions and locks, but hackers have so far found very simple ways to circumvent them. In early December 2022, for example, they managed to break ChatGPT's restrictions with various prompt engineering techniques, getting the bot to give instructions on how to build a Molotov cocktail or a nuclear bomb.

The new helper for hackers

ChatGPT makes things much easier for cybercriminals: similar to Google, they receive a result for a specific question. However, this answer is far more specific, comes with explanations, and is enriched with contextual examples that can be used directly. This makes “hacking” child's play and well suited to self-learning. It raises the potential danger for web applications to a whole new level, since even laypersons can use ChatGPT to carry out more complex attacks.

Existing code, including obfuscated and decompiled code, can already be checked for vulnerabilities, and working exploits can be created using the GPT-3 language model. Even hackers with no development skills can use it to complete an entire infection chain, from crafting a spear phishing email to running a reverse shell.
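
The same mechanism that attackers use to scan code is available to defenders for code audits: a snippet is submitted to a hosted language model together with a review prompt. The following is a minimal sketch, not a definitive workflow; the OpenAI chat completions endpoint, the model name, the environment variable holding the API key, and the deliberately vulnerable PHP fragment are all illustrative assumptions.

```python
import os
import requests

# Minimal sketch: ask a hosted language model to review a snippet for flaws.
# Endpoint, model name, and the deliberately vulnerable PHP fragment are
# illustrative assumptions, not taken from the article.
SNIPPET = """
$id = $_GET['id'];
$result = mysqli_query($conn, "SELECT * FROM users WHERE id = $id");
"""

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # any available chat model would do
        "messages": [
            {"role": "system", "content": "You are a code security reviewer."},
            {"role": "user",
             "content": f"List the security vulnerabilities in this PHP code:\n{SNIPPET}"},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Run against one's own code base, such a review is a cheap first pass; it does not replace a proper static analysis tool or a manual audit.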

READ:  What is a Secure Web Gateway (SWG)?

Examples of current attacks

The tool is already being used for numerous attack techniques. For example, a hacker used it to recreate known malware strains and techniques, such as a Python-based infostealer. The generated script looks for common file types, copies them to a folder, compresses them into a ZIP archive, and uploads them to a hard-coded FTP server.

Another piece of code created with ChatGPT is a simple JavaScript snippet that downloads programs and stealthily executes them on the system using PowerShell; with it, the attacker tried to steal credentials. In another case, a PHP code fragment was handed to ChatGPT to look for a SQL injection. Once the AI tool identifies a vulnerability in the code, it can create a cURL request to exploit it. Similar techniques can be used for other vulnerabilities, such as buffer overflows.

ChatGPT also helped a hacker create a Python script with signing, encryption, and decryption capabilities. It generates a cryptographic key to sign files and uses a fixed password to encrypt files on the system. All decryption functions are also implemented in the script. Such code can quickly be turned into ransomware.

In addition to creating malware, ChatGPT is also suitable for writing phishing emails. The tool is likewise used to develop payment systems for cryptocurrencies on dark web marketplaces or to create AI artworks that are then sold on Etsy and other online platforms.


The artificial intelligence behind the text-based dialogue system ChatGPT shows the possibilities AI also offers for cybercrime.

The danger increases

These attack and fraud possibilities are likely to be expanded constantly and to become more and more dangerous. Security experts not only bypassed ChatGPT's built-in content filters completely by using the API instead of the web version; they also found that ChatGPT can repeatedly modify the generated code and thus create multiple versions of the same malware. Such polymorphic malware is difficult to detect because of its high variability.

Additionally, the ChatGPT API can be leveraged within the malware itself to provide modules for different actions as needed. Since the malware does not behave maliciously as long as it is only stored on the hard drive and often does not contain any suspicious logic, signature-based security tools cannot detect it.


In addition, artificial intelligence can be used as a service (AIaaS). Cybercriminals no longer have to train the models themselves or rely on ready-made open source models. By significantly lowering the barrier to entry, AIaaS gives even hackers without any programming experience access to the latest AI functions via user-friendly APIs, without high costs.

Possible countermeasures

With the rapidly escalating threat of malicious use of AI models, organizations need to take proactive measures. These include investments in the research and development of detection and defense technologies, overarching strategies such as AI governance frameworks, a comprehensive security approach, and regular penetration tests.

This is precisely where AI tools can provide essential assistance. With the help of AIaaS, red team exercises can be improved, in particular through convincing phishing emails: the AI automatically personalizes the content based on the target person's background and personality. This often works better than manual drafting by red team members, who can then focus on higher-value tasks such as building context and gathering information.

In addition, AIaaS-based systems can be used to detect and fend off real phishing attacks. Compared with traditional email filters, language models like GPT-3 can distinguish more accurately between automatically generated and manually written texts. Basic AI knowledge and few resources are sufficient to use them.
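
One low-resource way to flag machine-generated text is to measure how predictable a message is to a small public language model: automatically generated prose tends to score a lower perplexity than text typed by a human. The sketch below uses GPT-2 from the Hugging Face transformers library as that reference model; the threshold is an assumption that would have to be calibrated on real mail traffic, and such a score would only ever be one signal among several in a mail filter.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Minimal sketch: score an email body by its perplexity under GPT-2.
# Machine-generated text tends to be more predictable, i.e. lower perplexity.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# The threshold is an assumption; calibrate it on known-good and known-bad mail.
SUSPICIOUS_BELOW = 30.0

def looks_machine_generated(email_body: str) -> bool:
    return perplexity(email_body) < SUSPICIOUS_BELOW

if __name__ == "__main__":
    sample = ("Dear customer, your account has been temporarily suspended. "
              "Please verify your payment details using the link below.")
    print(round(perplexity(sample), 1), looks_machine_generated(sample))
```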


AWAF log of the blocked ChatGPT-generated request attempting a SQL injection.

But even existing tools can effectively fend off AI-based attacks. For example, F5 tested the quality of its Advanced Web Application Firewall (AWAF) against a ChatGPT-generated SQL injection attempt. The request was automatically detected and blocked, even after URL encoding.
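
Such a check can also be scripted as a quick smoke test against one's own staging environment: send a harmless request and a textbook SQL injection probe to an endpoint behind the WAF and verify that only the probe is blocked. The following is a minimal sketch; the URL, the parameter name, and the assumption that the firewall answers blocked requests with HTTP 403 are placeholders for one's own setup.

```python
import requests

# Minimal sketch: confirm that the WAF in front of your own staging app blocks
# a textbook SQL injection probe while letting a harmless request through.
# URL, parameter name, and the expected 403 status code are assumptions.
BASE_URL = "https://staging.example.com/products"

benign = requests.get(BASE_URL, params={"id": "42"}, timeout=10)
probe = requests.get(BASE_URL, params={"id": "42' OR '1'='1"}, timeout=10)

print(f"harmless request: HTTP {benign.status_code}")
print(f"injection probe:  HTTP {probe.status_code}")

assert benign.status_code == 200, "harmless request should pass the WAF"
assert probe.status_code == 403, "injection probe should be blocked by the WAF"
```

Run such probes only against systems you own or are authorized to test; repeating them regularly makes silent WAF misconfigurations visible early.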

Conclusion

ChatGPT already delivers mostly solid results that can be used immediately or after some fine-tuning. Since the language models behind this and other tools are being improved quickly, the IT landscape is likely to look very different in the near future. Anyone who ignores this does so at their own risk. Security managers must now prepare for the immediate danger of AI tools being misused and use them for their own protection.

About the author: Frank Thias has been a Principal Systems Engineer in the presales team at F5 in Germany since 2006. He specializes in network security and advises major account clients in the banking, insurance and industrial sectors. He also has extensive knowledge in the areas of application security, cloud security, federation and single sign-on.
