We tested ChatGPT: ask your questions here without an account

General scientific debates. Presentations of new technologies (not directly related to renewable energies, biofuels, or other themes covered in other sub-forums).
dede2002
Grand Econologue
posts: 1111
Registration: 10/10/13, 16:30
Location: Geneva countryside
x 189

Re: We tested ChatGPT: ask your questions here without an account




by dede2002 » 26/01/23, 18:10

Our five senses have been trained for a long time:
Telling a natural fabric from a synthetic one.
Telling a synthetic image from a natural one.
The same goes for a sound, an aroma, or a taste.
Not everyone knows how to separate the wheat from the chaff...
It's the same for the sixth sense, we just have to get used to it :P
1 x
User avatar
Flytox
Moderator
posts: 14138
Registration: 13/02/07, 22:38
Location: Bayonne
x 839

Re: We tested ChatGPT: ask your questions here without an account




by Flytox » 27/01/23, 16:26

https://securite.developpez.com/actu/340582/Des-experts-en-securite-sont-parvenus-a-creer-un-logiciel-malveillant-polymorphe-hautement-evasif-a-l-aide-de-ChatGPT-le-logiciel-malveillant-serait-capable-d-echapper-aux-produits-de-securite/



Security experts managed to create "highly evasive" polymorphic malware using ChatGPT,
The malware would be able to evade security products
On January 20, 2023 at 14:31, by Bill Fassinou




Researchers from cybersecurity firm CyberArk claim to have developed a method to generate polymorphic malware using ChatGPT, OpenAI's AI chatbot. They claim that the malware generated by ChatGPT can easily evade security products and make mitigation cumbersome, all with very little effort or investment on the part of the adversary. The researchers warned of what they call "a new wave of cheap and easy polymorphic malware capable of evading certain security products."

So far, the hype around ChatGPT has revolved around its remarkable abilities in answering different user questions. ChatGPT seems to be good at everything, including writing essays, verses, and code to solve certain programming problems. However, researchers have begun to warn about the chatbot's abilities to generate working code. Indeed, while ChatGPT can generate working code, this ability can be abused by threat actors to create malware. Several such reports have been published since December.

Recently, a team of security researchers, Eran Shimony and Omer Tsarfati, from CyberArk provided evidence of this. The researchers say they have developed a method to generate malware from the prompts provided to ChatGPT. In a report, the team explained how OpenAI's chatbot can be used to generate injection code and mutate it. The first step in creating the malware was to bypass content filters preventing ChatGPT from creating harmful tools. To do this, the researchers simply insisted, asking the same question in a more authoritative way.


“Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we got working code,” the team said. Additionally, the researchers noted that when using the API version of ChatGPT (as opposed to the web version), the system does not appear to use its content filter. "The reason for this isn't clear, but it made it easier for us, as the web version tends to get bogged down in more complex queries," they said. The researchers then used ChatGPT to mutate the original code, creating multiple variations of the malware.

“In other words, we can mutate the result on a whim, making it unique every time. Additionally, adding constraints such as modifying the usage of a specific API call makes the life of security products more difficult,” the study report explains. Thanks to ChatGPT's ability to continually create and mutate injectors, the researchers were able to create a highly elusive and difficult to detect polymorphic program. As a reminder, polymorphic malware is a type of malware that has the ability to constantly change its identifiable characteristics in order to evade detection.

Many common forms of malware can be polymorphic, including viruses, worms, bots, Trojans, and keyloggers. Polymorphic techniques consist of frequently changing identifiable characteristics, such as file names and types or encryption keys, in order to make the malware unrecognizable to multiple detection methods. Polymorphism is used to evade the pattern-recognition detection that security solutions such as antivirus software rely on. The functional purpose of the program, however, remains the same.
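Why signature matching fails against polymorphism can be illustrated with a harmless sketch (the snippets below are hypothetical and do nothing malicious): two pieces of code that behave identically but differ textually produce completely different hashes, so a hash- or pattern-based signature built from one variant will not match the other.

```python
import hashlib

# Two functionally identical snippets: both add 1 to x,
# but they are written differently.
variant_a = "def f(x):\n    return x + 1\n"
variant_b = "def f(x):\n    y = 1\n    return x + y\n"

# A naive "signature" is just a hash of the file contents.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

# Same behavior...
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
assert ns_a["f"](41) == ns_b["f"](41) == 42

# ...but the signatures no longer match, so a scanner keyed to
# variant_a's hash would miss variant_b entirely.
assert sig_a != sig_b
```

This is why the article notes that such malware "can't really be signed": each mutation yields a new fingerprint while the behavior stays the same.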


“Using ChatGPT's ability to spawn persistence techniques, Anti-VM [Anti Virtual Machine] modules, and other malicious payloads, the possibilities for malware development are vast. Although we haven't delved into the details of communicating with the C&C server, there are several ways to do it discreetly without arousing suspicion,” they explain. According to the report, it took them weeks to create a proof of concept for this highly elusive malware, but they eventually came up with a way to execute payloads using text prompts on a victim's PC.

By testing the method on Windows, the researchers reported that it was possible to create a malware bundle containing a Python interpreter, which can be programmed to periodically ask ChatGPT for new modules. These modules could contain code - in text form - defining the functionality of the malware, such as code injection, file encryption or persistence. The malware would then be responsible for checking whether the code works as expected on the target system. This could be achieved by interaction between the software and a command and control (C&C) server.

As the malware receives incoming payloads as text rather than binaries, CyberArk researchers said it contains no suspicious logic in memory, which means it can evade most of the security products tested. In particular, it eludes products that rely on signature detection and bypasses the Antimalware Scan Interface (AMSI). "The malware does not contain any malicious code on disk, as it receives code from ChatGPT, then validates and executes it without leaving any traces in memory," Shimony said.


“Polymorphic malware is very difficult for security products to deal with because you can't really sign it. Moreover, it usually leaves no trace on the file system, as its malicious code is manipulated only in memory. And if one views the executable, it probably looks benign,” the researcher added. The research team said a cyberattack using this malware delivery method "is not just a hypothetical scenario, but a very real concern." Shimony cited detection issues as a primary concern.

“Most anti-malware products are not aware of this malware. Further research is needed for anti-malware solutions to be more effective against it,” he said. The researchers said they will expand and elaborate this research further and also aim to release some of the malware's source code for learning purposes. Their report comes weeks after Check Point Research discovered that ChatGPT was being used to develop new malicious tools, including information stealers.

Source: CyberArk
2 x
Reason is the madness of the strongest. The reason of the less strong is madness.
[Eugène Ionesco]
http://www.editions-harmattan.fr/index. ... te&no=4132
User avatar
gegyx
Econologue expert
posts: 6931
Registration: 21/01/05, 11:59
x 2870

Re: We tested ChatGPT: ask your questions here without an account




by gegyx » 27/01/23, 16:51

yeah.

When a service or an application is offered, freely available to the public, it is never for nothing.

Witness the "generosity" of Bill Gates in saving humanity or vaccinating populations with a miracle product.
(or breeding mosquitoes for the good of the populations...)

There is always a wolf hiding behind it.
0 x
Christophe
Moderator
posts: 79121
Registration: 10/02/03, 14:06
Location: Greenhouse planet
x 10973

Re: We tested ChatGPT: ask your questions here without an account




by Christophe » 27/01/23, 17:30

Security experts are not amateurs (I think), so ChatGPT certainly gave them some pointers (not such experts then, guys?), but I don't believe for a second that it did everything by itself! Besides, that's what it says: "with the help of..."... while most people will understand that the program was developed by GPT...

For example, a few days ago I asked it to write a simple program; it never asked me anything... it just said "Wait, I'll think about it and I'll get back to you in a few moments..." :Lol: :Lol: :Lol:

Someone above said that it was good at programming; it would be nice to have some concrete examples, because personally I haven't managed to get anything out of it on this front...

Gegyx, basically it's already the case: GPT is already exploited by crooks, just with fake applications... and lazy students...
0 x
Christophe
Moderator
posts: 79121
Registration: 10/02/03, 14:06
Location: Greenhouse planet
x 10973

Re: We tested ChatGPT: ask your questions here without an account




by Christophe » 27/01/23, 17:37

: Mrgreen: : Mrgreen: : Mrgreen:

GPT_telerama.png
GPT_telerama.png (71.22 KiB) Viewed 612 times
0 x
Christophe
Moderator
posts: 79121
Registration: 10/02/03, 14:06
Location: Greenhouse planet
x 10973

Re: We tested ChatGPT: ask your questions here without an account




by Christophe » 27/01/23, 17:41

GPT_sexist.jpg
GPT_sexist.jpg (70.35 KiB) Viewed 612 times
0 x
Christophe
Moderator
posts: 79121
Registration: 10/02/03, 14:06
Location: Greenhouse planet
x 10973

Re: We tested ChatGPT: ask your questions here without an account




by Christophe » 27/01/23, 17:43

FnW2iJZWIAcrkum.jpg
FnW2iJZWIAcrkum.jpg (125.79 KiB) Viewed 611 times
0 x
Christophe
Moderator
posts: 79121
Registration: 10/02/03, 14:06
Location: Greenhouse planet
x 10973

Re: We tested ChatGPT: ask your questions here without an account




by Christophe » 27/01/23, 17:50

So who was right? (not sure that the 12 layoffs are due to that, huh!)

0 x
izentrop
Econologue expert
posts: 13644
Registration: 17/03/14, 23:42
Location: picardie
x 1502
Contact :

Re: We tested ChatGPT: ask your questions here without an account




by izentrop » 27/01/23, 20:58

A silly ChatGPT example:
0 x
User avatar
Exnihiloest
Econologue expert
posts: 5365
Registration: 21/04/15, 17:57
x 660

Re: We tested ChatGPT: ask your questions here without an account




by Exnihiloest » 27/01/23, 22:40

izentrop wrote:A silly ChatGPT example:
...

In any case, it's very nice of the AI to admit its mistakes; we are not used to that with humans. But does it really take them into account?
If the same question is then asked by someone else, will it answer differently?
0 x

 

