Security experts managed to create "highly evasive" polymorphic malware using ChatGPT
The malware is reportedly able to evade security products
On January 20, 2023, at 2:31 p.m., by Bill Fassinou
Researchers from cybersecurity firm CyberArk claim to have developed a method to generate polymorphic malware using ChatGPT, OpenAI's AI chatbot. They claim that malware generated by ChatGPT can easily evade security products and make mitigation cumbersome, all with very little effort or investment on the part of the adversary. The researchers warned of what they call "a new wave of cheap and easy polymorphic malware capable of evading certain security products."
So far, the hype around ChatGPT has revolved around its remarkable ability to answer all kinds of user questions. ChatGPT seems to be good at everything, including writing essays, verses, and code to solve certain programming problems. However, researchers have begun to warn that this ability to generate working code can be abused by threat actors to create malware. Several such reports have been published since December.
Recently, a team of CyberArk security researchers, Eran Shimony and Omer Tsarfati, provided evidence of this. They say they have developed a method to generate malware from prompts given to ChatGPT. In a report, the team explained how OpenAI's chatbot can be used to generate injection code and mutate it. The first step in creating the malware was to bypass the content filters that prevent ChatGPT from creating harmful tools. To do this, the researchers simply insisted, asking the same question in a more authoritative way.
“Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we got working code,” the team said. Additionally, the researchers noted that when using the API version of ChatGPT (as opposed to the web version), the system does not appear to use its content filter. "The reason for this isn't clear, but it made it easier for us, as the web version tends to get bogged down in more complex queries," they said. The researchers then used ChatGPT to mutate the original code, creating multiple variations of the malware.
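As an illustration, the snippet below shows roughly what querying the model through the API (rather than the web interface) looked like at the time. It is a minimal sketch assuming the pre-1.0 openai Python client and the completions endpoint of that era; the model name and the benign prompt are illustrative and are not taken from CyberArk's report.

```python
import openai

openai.api_key = "sk-..."  # placeholder; load from the environment in practice

# Query the API directly instead of the web interface. The report notes the
# API path did not seem to apply the same content filter. The model choice is
# an assumption; CyberArk does not name the underlying model.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a Python function that lists the names of running processes.",
    max_tokens=512,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```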
“In other words, we can mutate the result on a whim, making it unique every time. Additionally, adding constraints such as modifying the usage of a specific API call makes the life of security products more difficult,” the report explains. Thanks to ChatGPT's ability to continually create and mutate injectors, the researchers were able to build a highly elusive, difficult-to-detect polymorphic program. As a reminder, polymorphic malware is malware that constantly changes its identifiable characteristics in order to evade detection.
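The mutation step can be pictured as a loop that feeds code back to the model with rewrite constraints. The sketch below is hypothetical and operates on a harmless snippet; the rewrite prompt is illustrative, not the researchers' wording.

```python
import openai

SEED = '''\
def greet(name):
    print("Hello, " + name)
'''

def mutate(code: str) -> str:
    """Ask the model for a functionally equivalent but textually different
    rewrite; each call produces a new variant of the same logic."""
    prompt = (
        "Rewrite the following Python code so it behaves identically but "
        "uses different identifier names and structure. Return only code.\n\n"
        + code
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed model, as above
        prompt=prompt,
        max_tokens=256,
        temperature=1.0,  # a high temperature encourages varied rewrites
    )
    return resp["choices"][0]["text"]

# Three functionally identical variants with three different surface forms.
variants = [mutate(SEED) for _ in range(3)]
```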
Many common forms of malware can be polymorphic, including viruses, worms, bots, Trojans, and keyloggers. Polymorphic techniques frequently change identifiable characteristics, such as file names and types or encryption keys, to make the malware unrecognizable to many detection methods. Polymorphism is used to defeat the pattern-matching detection that security solutions such as antivirus software rely on. The program's functional purpose, however, remains the same.
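A short, benign demonstration of why this defeats signature matching: two snippets that compute the same thing hash to completely different values, so a signature keyed to one variant misses the next.

```python
import hashlib

# Functionally identical, textually different.
variant_a = "def add(a, b):\n    return a + b\n"
variant_b = "def total(x, y):\n    result = x + y\n    return result\n"

for variant in (variant_a, variant_b):
    print(hashlib.sha256(variant.encode()).hexdigest())
# The two digests differ entirely, so any signature derived from one
# variant fails to match the next mutation of the same logic.
```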
“Using ChatGPT's ability to spawn persistence techniques, Anti-VM [Anti Virtual Machine] modules, and other malicious payloads, the possibilities for malware development are vast. Although we haven't delved into the details of communicating with the C&C server, there are several ways to do it discreetly without arousing suspicion,” they explain. According to the report, it took them weeks to create a proof of concept for this highly elusive malware, but they eventually came up with a way to execute payloads using text prompts on a victim's PC.
Testing the method on Windows, the researchers reported that it was possible to create a malware bundle containing a Python interpreter, which can be programmed to periodically ask ChatGPT for new modules. These modules could contain code, in text form, defining the functionality of the malware, such as code injection, file encryption, or persistence. The malware would then be responsible for checking whether the code works as expected on the target system, which could be achieved through interaction between the software and a command-and-control (C&C) server.
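The general pattern described, fetching code as plain text and running it in memory, can be sketched as follows. This is a deliberately benign, hypothetical skeleton: fetch_module stands in for the ChatGPT/C&C round trip described in the report, and the "validation" here is only a compile check on a harmless snippet.

```python
def fetch_module() -> str:
    """Stand-in for the step the report describes: periodically asking
    ChatGPT (or a C&C server) for a new module delivered as plain text.
    This placeholder simply returns a harmless snippet."""
    return 'def run():\n    print("module executed in memory")\n'

def load_and_run(source: str) -> None:
    # Check that the returned text at least compiles, then execute it
    # entirely in memory; nothing is written to disk.
    code = compile(source, "<in-memory-module>", "exec")
    scope = {}
    exec(code, scope)
    scope["run"]()

load_and_run(fetch_module())
```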
Because the malware receives incoming payloads as text rather than binaries, CyberArk researchers said it contains no suspicious logic in memory, which means it can evade most of the security products tested. It particularly eludes products that rely on signature detection and bypasses the Antimalware Scan Interface (AMSI). "The malware does not contain any malicious code on disk, as it receives code from ChatGPT, then validates and executes it without leaving any traces in memory," Shimony said.
“Polymorphic malware is very difficult for security products to deal with because you can't really sign it. Also, they usually leave no trace on the file system, as their malicious code is manipulated only in memory. Also, if one views the executable, it probably looks benign,” the researcher added. The research team said a cyberattack using this malware delivery method "is not just a hypothetical scenario, but a very real concern." Shimony cited detection issues as a primary concern.
“Most anti-malware products are not aware of this malware. Further research is needed for anti-malware solutions to be more effective against it,” he said. The researchers said they will expand on this research further and also aim to release some of the malware's source code for learning purposes. Their report comes weeks after Check Point Research discovered that ChatGPT was being used to develop new malicious tools, including information stealers.
Source: CyberArk