Cybercriminals continue to use OpenAI’s ChatGPT to develop new malicious tools such as infostealers, multi-layer encryption tools, and dark web marketplace scripts.
Experts at Check Point Research (CPR) published their findings in a new advisory last Friday.
“In underground hacking forums, threat actors are creating infostealers and encryption tools to facilitate fraud,” the company said in an email to Infosecurity.
In particular, CPR documented three recent cases of malicious ChatGPT use.
The first, discovered on December 29, 2022 in a dark web forum, involved recreating malware strains and techniques described in popular malware research publications and write-ups.
“While this individual could be a tech-oriented threat actor, these posts seemed to demonstrate [to] less technically capable cybercriminals how to leverage ChatGPT for malicious purposes, with real examples they can immediately use,” CPR wrote.
A second type of malicious activity, observed by security researchers in December 2022, involved the creation of a multi-layered encryption tool in the Python programming language.
“This could mean that potential cybercriminals with little or no development skills can leverage ChatGPT to develop malicious tools and become fully fledged cybercriminals with technical capabilities,” explained CPR.
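To give a sense of what a “multi-layered” encryption tool means in practice (this is not the code CPR observed, just a benign, illustrative sketch using a toy XOR stream cipher from the standard library; the function names and key-derivation scheme are invented for this example and should not be used for real security):

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || counter (toy construction, illustrative only)."""
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR the data against a key-derived keystream (one 'layer' of encryption)."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def multi_layer_encrypt(data: bytes, keys: list[bytes]) -> bytes:
    for k in keys:              # apply each layer in order
        data = xor_layer(data, k)
    return data

def multi_layer_decrypt(data: bytes, keys: list[bytes]) -> bytes:
    for k in reversed(keys):    # XOR is its own inverse; undo layers in reverse
        data = xor_layer(data, k)
    return data
```

The point of the CPR finding is that even this kind of boilerplate, which previously required some programming ability, can now be generated on request by someone with no development skills.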
Finally, the team found a cybercriminal writing a tutorial on how to use ChatGPT to create dark web marketplace scripts.
“The main role of these marketplaces in the underground illicit economy is to provide a platform for the automated trade of illegal or stolen goods, such as stolen accounts and payment cards, malware, and even drugs and ammunition, with all payments made in cryptocurrencies,” the advisory reads.
According to CPR’s Threat Intelligence Group Manager, Sergey Shykevich, ChatGPT can be used to help developers write code, but, as the aforementioned cases show, it can also be used for malicious purposes.
“While the tools analyzed in this report are very basic, it is only a matter of time before more sophisticated attackers step up their use of AI-based tools,” warns Shykevich. “CPR will continue to investigate ChatGPT-related cybercrime in the coming weeks.”
Additionally, Check Point Data Group Manager Omer Dembinsky predicts that AI tools like ChatGPT will continue to facilitate cyberattacks in 2023.
The advisory comes weeks after cybersecurity experts first warned that ChatGPT has the potential to democratize cybercrime.