Our five senses have been trained for a long time:
to tell a natural fabric from a synthetic one,
to tell a synthetic image from a natural one,
and the same goes for a sound, an aroma or a taste.
Not everyone knows how to separate the wheat from the chaff...
It's the same for this sixth sense; we just have to get used to it.
We tested ChatGPT: ask your questions here without an account
Re: We tested ChatGPT: ask your questions here without an account
https://securite.developpez.com/actu/340582/Des-experts-en-securite-sont-parvenus-a-creer-un-logiciel-malveillant-polymorphe-hautement-evasif-a-l-aide-de-ChatGPT-le-logiciel-malveillant-serait-capable-d-echapper-aux-produits-de-securite/
Security experts managed to create "highly evasive" polymorphic malware using ChatGPT,
The malware would be able to evade security products
On January 20, 2023 at 2:31 p.m., by Bill Fassinou
Researchers from cybersecurity firm CyberArk claim to have developed a method to generate polymorphic malware using ChatGPT, OpenAI's AI chatbot. Researchers claim that malware generated by ChatGPT can easily evade security products and make mitigation measures cumbersome; all this with very little effort or investment on the part of the adversary. The researchers warned of what they call "a new wave of cheap and easy polymorphic malware capable of evading certain security products."
So far, the hype around ChatGPT has revolved around its remarkable abilities in answering different user questions. ChatGPT seems to be good at everything, including writing essays, verses, and code to solve certain programming problems. However, researchers have begun to warn about the chatbot's abilities to generate working code. Indeed, while ChatGPT can generate working code, this ability can be abused by threat actors to create malware. Several such reports have been published since December.
Recently, a team of security researchers, Eran Shimony and Omer Tsarfati, from CyberArk provided evidence of this. The researchers say they have developed a method to generate malware from the prompts provided to ChatGPT. In a report, the team explained how OpenAI's chatbot can be used to generate injection code and mutate it. The first step in creating the malware was to bypass content filters preventing ChatGPT from creating harmful tools. To do this, the researchers simply insisted, asking the same question in a more authoritative way.
“Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we got working code,” the team said. Additionally, the researchers noted that when using the API version of ChatGPT (as opposed to the web version), the system does not appear to use its content filter. "The reason for this isn't clear, but it made it easier for us, as the web version tends to get bogged down in more complex queries," they said. The researchers then used ChatGPT to mutate the original code, creating multiple variations of the malware.
“In other words, we can mutate the result on a whim, making it unique every time. Additionally, adding constraints such as modifying the usage of a specific API call makes the life of security products more difficult,” the study report explains. Thanks to ChatGPT's ability to continually create and mutate injectors, the researchers were able to create a highly elusive and difficult to detect polymorphic program. As a reminder, polymorphic malware is a type of malware that has the ability to constantly change its identifiable characteristics in order to evade detection.
Many common forms of malware can be polymorphic, including viruses, worms, bots, Trojans, and keyloggers. Polymorphic techniques consist of frequently changing identifiable characteristics, such as file names and types or encryption keys, in order to make the malware unrecognizable to multiple detection methods. Polymorphism is used to evade the pattern-recognition detection that security solutions such as antivirus software rely on. The functional purpose of the program, however, remains the same.
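To illustrate why signature-based detection struggles with polymorphism, here is a minimal, harmless sketch: two snippets of code that do exactly the same thing but differ in their bytes, so a signature (here, a SHA-256 hash) computed over one variant fails to match the other. The snippets and names are invented for illustration; real polymorphic malware mutates far more aggressively than this.

```python
import hashlib

# Two functionally equivalent snippets: same behavior, different bytes.
variant_a = "def greet():\n    return 'hi'\n"
variant_b = "def greet():\n    msg = 'hi'\n    return msg\n"

# A naive "signature" is just a hash of the file contents.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# The signatures differ even though the behavior is identical,
# so a detector keyed on sig_a never flags variant_b.
print(sig_a == sig_b)  # False
```

This is the core of the researchers' point: if ChatGPT can emit a fresh, functionally equivalent variant on every request, any detector that matches fixed byte patterns is always one mutation behind.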
“Using ChatGPT's ability to spawn persistence techniques, Anti-VM [Anti Virtual Machine] modules, and other malicious payloads, the possibilities for malware development are vast. Although we haven't delved into the details of communicating with the C&C server, there are several ways to do it discreetly without arousing suspicion,” they explain. According to the report, it took them weeks to create a proof of concept for this highly elusive malware, but they eventually came up with a way to execute payloads delivered as text prompts on a victim's PC.
By testing the method on Windows, the researchers reported that it was possible to create a malware bundle containing a Python interpreter, which can be programmed to periodically ask ChatGPT for new modules. These modules could contain code - in text form - defining the functionality of the malware, such as code injection, file encryption or persistence. The malware would then be responsible for checking whether the code works as expected on the target system. This could be achieved by interaction between the software and a command and control (C&C) server.
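The architecture described above rests on a generic pattern: code arrives as plain text and is compiled and executed at runtime, so no new binary ever touches the disk. The following is a deliberately benign sketch of that pattern, with the "remote module" replaced by a local string; the function name `run` and the module label are invented for illustration, not taken from CyberArk's report.

```python
# Hypothetical sketch: executing a module that arrives as plain text.
# In the design described by the researchers, the text would come from
# ChatGPT or a C&C server; here it is just a hard-coded local string.
module_source = '''
def run():
    return "module executed"
'''

# Compile and execute the text in an isolated namespace, then call
# the entry point it defines. Nothing is written to disk.
namespace = {}
exec(compile(module_source, "<text-module>", "exec"), namespace)
result = namespace["run"]()
print(result)
```

Because the payload exists only as a string until the moment of execution, there is no file for an on-disk scanner to hash, which is the property the researchers exploit.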
Because the malware receives incoming payloads as text rather than binaries, CyberArk researchers said it contains no suspicious logic in memory, which means it can evade most of the security products tested. It particularly eludes products that rely on signature detection and bypasses the Antimalware Scan Interface (AMSI). "The malware does not contain any malicious code on disk, as it receives code from ChatGPT, then validates and executes it without leaving any traces in memory," Shimony said.
“Polymorphic malware is very difficult for security products to deal with because you can't really sign it. Also, they usually leave no trace on the file system, as their malicious code is manipulated only in memory. Also, if one views the executable, it probably looks benign,” the researcher added. The research team said a cyberattack using this malware delivery method "is not just a hypothetical scenario, but a very real concern." Shimony cited detection issues as a primary concern.
“Most anti-malware products are not aware of this malware. Further research is needed for anti-malware solutions to be more effective against it,” he said. The researchers said they will expand and elaborate this research further and also aim to release some of the malware's source code for learning purposes. Their report comes weeks after Check Point Research discovered that ChatGPT was being used to develop new malicious tools, including information stealers.
Source: CyberArk
Reason is the madness of the strongest. The reason of the less strong is madness.
[Eugène Ionesco]
http://www.editions-harmattan.fr/index. ... te&no=4132
Re: We tested ChatGPT: ask your questions here without an account
yeah.
When a service or an application is offered free to the public, it is never for nothing.
See the "generosity" of Bill Gates in saving humanity or vaccinating populations with a miracle product
(or breeding mosquitoes for the good of the populations...).
There is always a wolf hiding behind it.
-
- Moderator
- posts: 79121
- Registration: 10/02/03, 14:06
- Location: Greenhouse planet
- x 10973
Re: We tested ChatGPT: ask your questions here without an account
Security experts are not amateurs (I think), so ChatGPT certainly gave them some pointers (not such experts then, guys?), but I don't believe for a second that it did everything by itself! Besides, that's what the article says: "with the help of..." Yet most people will take away that the program was developed by GPT...
For example, a few days ago I asked it to write a simple program; it never asked me a single question... it just said, "Wait, I'll think about it and get back to you in a moment..."
Someone above said it was good at programming; it would be nice to have some concrete examples, because personally I didn't get anywhere with it...
Gegyx, basically that's already the case: GPT is already being exploited by crooks, if only with fake applications... and by lazy students...
Do an image search or a text search - Forum netiquette
-
- Moderator
- posts: 79121
- Registration: 10/02/03, 14:06
- Location: Greenhouse planet
- x 10973
Re: We tested ChatGPT: ask your questions here without an account
So who was right? (not sure that the 12 layoffs are due to that, huh!)
- Exnihiloest
- Econologue expert
- posts: 5365
- Registration: 21/04/15, 17:57
- x 660
Re: We tested ChatGPT: ask your questions here without an account
izentrop wrote:A silly ChatGPT example:
...
In any case, it's very cool that the AI acknowledges its errors; we're not used to that with humans. But does it really take them into account?
If someone else then asks the same question, will it answer differently?