According to a new survey from BlackBerry, a majority (51%) of security leaders expect ChatGPT to be at the center of successful cyberattacks within a year.
A survey of 1,500 IT decision makers across North America, the United Kingdom, and Australia found that 71% believe nation states are likely already using the technology for malicious purposes against other nations.
ChatGPT is an artificial intelligence (AI)-powered language model developed by OpenAI, deployed as a chatbot that gives users quick, detailed responses to their questions. The product was launched at the end of 2022.
Cyber threats from ChatGPT
Despite its enormous potential, information security experts have raised concerns that cyber threat actors could use the tool to launch attacks, such as developing malware and crafting convincing social engineering scams.
There are also concerns that it could be used to spread misinformation online in a faster and more persuasive way.
These concerns are highlighted in BlackBerry’s new report. While respondents from all countries acknowledged that ChatGPT’s capabilities can be used for “good,” 74% saw it as a potential cybersecurity threat.
The top concern among IT leaders was the technology’s ability to craft more credible and legitimate-sounding phishing emails (53%). This was followed by its ability to help less experienced cybercriminals improve their technical knowledge and develop more specialized skills (49%), and its use in spreading misinformation (49%).
While IT leaders fear ChatGPT being used to craft phishing emails, one expert warns that the AI tool is no better at this than what cybercriminals can already produce themselves.
Speaking to Infosecurity, Recorded Future intelligence analyst Allan Liska noted that ChatGPT is not necessarily good at these types of activity. “It can be used to craft phishing emails, but cybercriminals conducting phishing campaigns are already coming up with better emails and more creative ways to carry out phishing attacks. It can also be used to write malware, but at least for now it is still not good malware,” he explained.
However, this may change as the technology is continually trained. Liska added: “Both will get better eventually, and we don’t yet know how that will play out.”
Strengthening cyber defenses with AI
Commenting on the study, Shishir Singh, CTO of cybersecurity at BlackBerry, expressed optimism that security professionals can leverage ChatGPT to improve their cyber defenses.
“It has been well documented that people with malicious intent are testing the waters, but over the course of this year we expect to see hackers get a much better handle on how to use ChatGPT successfully for nefarious purposes, whether as a tool to write better mutable malware or as an enabler to bolster their ‘skill set.’ Both cyber professionals and hackers will continue to investigate how they can best utilize it. Time will tell who is more effective,” he said.
The survey also found that 82% of IT decision makers plan to invest in AI-driven cybersecurity over the next two years, with nearly half (48%) planning to do so before the end of 2023. This reflects growing concern that existing protection solutions will be less effective at defending against the increasingly sophisticated attacks enabled by technologies such as ChatGPT.
Speaking to Infosecurity, Singh said it is important for organizations to use AI to proactively combat AI-powered threats, especially when it comes to enhancing their prevention and detection capabilities.
“One of the main benefits of using AI in cybersecurity is the ability to analyze vast amounts of data in real time. The sheer volume of data generated by modern networks makes it impossible for humans to keep up. AI can process data much faster, so we can identify threats more efficiently,” he said.
“As cyber-attacks become more serious and sophisticated, and attackers evolve their tactics, techniques, and procedures (TTPs), traditional security measures become obsolete. AI can learn from previous attacks and adapt its defenses, making it more resilient to future threats.”
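Singh’s point about machine-scale data analysis can be illustrated with a toy example. The sketch below (plain Python, purely illustrative and not drawn from BlackBerry’s products or any real detection system) flags anomalous spikes in per-host network traffic against a simple statistical baseline — a crude stand-in for the far richer patterns AI-driven tools learn at scale. All names and the sample data are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.0):
    """Flag values more than `threshold` standard deviations above the mean.

    A crude statistical baseline: real AI-driven detection learns far
    richer patterns, but the principle -- scoring every event against a
    learned notion of "normal" -- is the same.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing stands out
    return [(i, v) for i, v in enumerate(samples)
            if (v - mu) / sigma > threshold]

# Hypothetical bytes-per-minute figures for one host; the spike at
# index 5 stands in for suspicious exfiltration-like traffic.
traffic = [1200, 1150, 1300, 1250, 1180, 98000, 1220, 1190]
print(flag_anomalies(traffic))  # → [(5, 98000)]
```

The value of automating even this trivial check is the volume argument Singh makes: a script can score millions of such measurements per second, whereas a human analyst cannot.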
Singh added that AI is also important in mitigating advanced persistent threats (APTs).
Beyond cyber threats, privacy experts have also discussed how AI models may breach data protection regulations such as the GDPR. This includes how OpenAI collected the data on which ChatGPT was trained, and how personal data is shared with third parties.