Cybersecurity researchers have successfully used ChatGPT to write phishing emails and malware scripts. Has the game changed with AI?
ChatGPT has enabled people with little to no cybersecurity knowledge to create phishing emails and malicious code, lowering the barrier to entry for cybercrime.
OpenAI’s tool has intrigued the public. But security experts warn that this AI tool and others like it can be misused to generate phishing content.
First, the researchers asked the chatbot to create a phishing email impersonating a hosting company. ChatGPT provided output, even though it warned the researchers that the content might violate its content policy. The researchers then asked ChatGPT to create a variation of the same email, this time asking recipients to download a malicious Excel file instead of clicking on a link.
Just like before, ChatGPT provided satisfactory output, despite displaying a warning notice. ChatGPT also created malicious VBA (Visual Basic for Applications) code. While the initial output was barely functional, the researchers eventually obtained basic but usable malicious code after multiple iterations.
Researchers also worry that ChatGPT will help more sophisticated attackers. English is not the native language of many cybercriminals, so they previously had to hire native speakers to craft convincing phishing content, which costs money, time, and effort. With ChatGPT, they no longer need these underground services and can produce fluent phishing emails themselves.
Beyond ChatGPT, Will It Be AI vs. AI?
And it is not just OpenAI’s ChatGPT that poses a risk. More sophisticated attackers can also leverage the startup’s Codex tool to improve and iterate on their code at an unprecedented pace. Codex is a language model designed to translate natural language into code.
It is currently challenging to tell whether a specific phishing campaign was created with the help of AI. Nevertheless, it is concerning that these tools could enable phishing attacks at a much larger scale.
AI can be beneficial for security, though. Cybersecurity researchers have employed AI to improve security solutions and detect potential threats. Has the game changed, and will it be AI vs. AI in the future of cybersecurity?
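To make the defensive side concrete, here is a minimal, hypothetical sketch of automated phishing detection. It is only a keyword heuristic, a deliberately simple stand-in for the far more sophisticated machine-learning models the article alludes to, and the phrase list and sample message are invented for illustration.

```python
# Hypothetical toy phishing detector: flags a message when it contains
# enough phrases commonly seen in phishing lures. The phrase list below
# is invented for illustration; real detectors learn such signals from
# large labeled corpora rather than a hand-written list.
SUSPICIOUS_PHRASES = [
    "verify your password",
    "account has been suspended",
    "download the attached",
    "click here",
    "urgent action required",
]

def phishing_score(email_text: str) -> int:
    """Count how many known-suspicious phrases appear in the message."""
    text = email_text.lower()
    return sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)

def looks_like_phishing(email_text: str, threshold: int = 2) -> bool:
    """Flag a message once it trips the threshold number of phrases."""
    return phishing_score(email_text) >= threshold

# Invented example message in the style described in the article.
msg = ("URGENT ACTION REQUIRED: your account has been suspended. "
       "Click here to verify your password.")
print(looks_like_phishing(msg))  # True: four suspicious phrases match
```

A real AI-vs-AI pipeline would replace the static phrase list with a trained text classifier, but the structure — score a message, compare against a threshold — is the same idea at toy scale.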