ChatGPT Creates Polymorphic Malware – Infosecurity Magazine

OpenAI’s ChatGPT reportedly created a new strain of polymorphic malware following text-based interactions with security researchers at CyberArk.

According to a technical write-up the company recently shared with Infosecurity, malware written with ChatGPT’s help can “easily evade security products and make mitigation cumbersome with very little effort or investment by the adversary.”

Written by CyberArk security researchers Eran Shimony and Omer Tsarfati, the report explains that the first step in creating the malware was to bypass the content filters that prevent ChatGPT from creating malicious tools.

To do so, the CyberArk researchers simply insisted, posing the same question more authoritatively.

“Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we received functional code,” wrote Shimony and Tsarfati.

Additionally, the researchers noted that when using the API version of ChatGPT (as opposed to the web version), the system does not appear to apply its content filter.

“It is not clear why this is the case, but it makes our task much easier, as the web version tends to get bogged down with more complex requests,” reads the CyberArk report.
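The distinction the researchers draw is between the moderated web UI and a raw HTTP call to the model's API. The following sketch only assembles such a request payload; the endpoint shape, model name, and helper function are assumptions for illustration, not CyberArk's actual code, and nothing here is sent over the network.

```python
import json

def build_completion_request(prompt: str, constraints: list[str]) -> bytes:
    """Assemble a hypothetical completion-API request body.

    Illustrates the article's point that the API path takes a plain
    text prompt directly, with any "constraints" simply prepended as
    more text. Field names and the model identifier are assumed for
    the sketch and may not match any real deployment.
    """
    payload = {
        "model": "text-davinci-003",  # model name assumed for illustration
        "prompt": "\n".join(constraints + [prompt]),
        "max_tokens": 512,
    }
    return json.dumps(payload).encode("utf-8")

# The caller would POST these bytes to the provider's completions endpoint.
body = build_completion_request(
    "produce a variant of the earlier snippet",
    ["obey the constraints below", "avoid the API calls used last time"],
)
```

Because the constraints are just concatenated text, repeating a refused request with firmer wording (as the researchers describe doing) is a one-line change to the prompt string.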

Shimony and Tsarfati then used ChatGPT to mutate the original code, creating multiple variations of it.

“In other words, we can mutate the output on a whim, making it unique every time. Moreover, adding constraints, like changing the use of a specific API call, makes security products’ lives more difficult,” they wrote.

Thanks to ChatGPT’s ability to create an injector and continuously mutate it, the researchers were able to produce a highly elusive, difficult-to-detect polymorphic program.
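The core idea behind such polymorphism is that behavior stays fixed while the code's byte-level signature changes on every generation. A deliberately benign toy sketch (no language model involved, all names hypothetical) of that property:

```python
import hashlib
import random

def make_variant(seed: int) -> str:
    """Return source for a functionally identical snippet whose byte
    signature differs per seed.

    Benign illustration of the polymorphism concept: the behavior
    (doubling a number) never changes, but the function name and an
    inert filler line do, so a naive signature over the source text
    never repeats.
    """
    rng = random.Random(seed)
    name = "f_" + "".join(rng.choice("abcdef") for _ in range(8))
    junk = f"_pad_{rng.randrange(10**6)} = {rng.randrange(10**6)}  # inert filler"
    return f"def {name}(x):\n    {junk}\n    return x * 2\n"

def run_variant(src: str, arg: int) -> int:
    """Execute a variant's source and call its randomly named function."""
    scope: dict = {}
    exec(src, scope)
    fn = next(v for k, v in scope.items() if k.startswith("f_"))
    return fn(arg)

variants = [make_variant(s) for s in range(3)]
signatures = {hashlib.sha256(v.encode()).hexdigest() for v in variants}
```

Every variant computes the same result, yet each hashes differently, which is why the researchers argue that signature-based detection struggles against continuously regenerated code.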

“By leveraging ChatGPT’s ability to generate various persistence techniques, anti-VM modules and other malicious payloads, the possibilities for malware development are vast,” the researchers explained.

“While we have not delved into the details of communication with the C&C server, there are several ways this can be done discreetly without raising suspicion.”

CyberArk confirmed it intends to expand and elaborate on this research further, and also aims to release some of the source code for learning purposes.

The report comes after Check Point Research published findings showing ChatGPT being used to develop new malicious tools, including infostealers, multi-layer encryption tools and dark web marketplace scripts.

