
Cybercriminals Using OpenAI's ChatGPT to Develop Malicious Code

Jan 10, 2023
ChatGPT has made cyberattacks possible even for those with limited experience, expanding the range of threats in the digital world.
Oguz Dagli
Recent reports have uncovered instances of cybercriminals utilizing OpenAI's ChatGPT technology to create code for malicious purposes. This is a growing concern as it means that even those with little technical skills can now launch sophisticated cyber attacks using code generated by ChatGPT.
According to Check Point Research, they have found evidence of cybercriminals using the large language model (LLM) interface that OpenAI made publicly available in November. This is similar to the rise of as-a-service models in the cybercrime world, as ChatGPT opens up another avenue for less-skilled actors to easily launch cyberattacks.
"Some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all," the researchers wrote. "Although the tools that we present in this report are pretty basic, it's only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad." ChatGPT's machine learning capabilities enable the text-based tool to interact in a conversational way: users type a question and receive an answer in a dialogue format. The technology can also answer follow-up questions and challenge users' assumptions.
It's worth noting that ChatGPT is also known for producing buggy code; Stack Overflow banned answers generated by the AI system due to their often serious flaws. However, the technology is improving, and last month a Finnish government report warned that AI systems are already in use for social engineering and could drive a huge surge in attacks within five years.
The sophistication of OpenAI's offering has generated both worry and enthusiasm, with educational institutions, conference organizers, and other groups moving to ban the use of ChatGPT for everything from school papers to research work. In December, the analysts demonstrated how ChatGPT can be used to create an entire infection flow, from phishing emails to running a reverse shell. They also used the chatbot to build backdoor malware that can dynamically run scripts created by the AI tool. At the same time, they showed how it can help cybersecurity professionals in their work.
Now cybercriminals are testing it. On December 29, a thread titled "ChatGPT – Benefits of Malware" appeared on a widely used underground hacking forum, started by a person who said they were experimenting with the interface to recreate common malware strains and techniques. The poster shared code for a Python-based information stealer that searches for and copies certain file types and uploads them to a hardcoded FTP server. Check Point confirmed that the code was from a basic stealer malware.
This is a significant concern, as it means that even unskilled cybercriminals can now launch sophisticated attacks using the code generated by ChatGPT. It also highlights the need for stricter oversight and regulation of AI-based technology, as well as for companies to take steps to prevent malicious use of their technology. As the researchers pointed out, "it's only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad." It is important for both the public and private sectors to be aware of these developments and take necessary precautions to protect against potential cyber threats.