The artificial intelligence-based chatbot ChatGPT is undoubtedly a very useful tool, but it can become a weapon in the hands of malicious actors. Hackers have already started to use ChatGPT for their own purposes.
ChatGPT is often used for tasks such as writing articles, producing code, planning events, or learning about new topics. But hackers have discovered another use for the chatbot: developing malware.
A few weeks after ChatGPT became publicly available, researchers at Check Point Research found that some cybercrime forum members with little or no coding experience were using the chatbot to write software and emails that could support illegal activities such as espionage, ransomware, spam, and phishing.
Hackers Showed Great Interest
“Cybercriminals have already shown significant interest in neural networks and are actively using this latest trend to generate malicious code,” the company’s researchers said, adding, however, that “it is too early to decide whether ChatGPT capabilities will become the new favorite tool of darknet participants.”
One darknet forum member posted his first malicious script, created with the help of ChatGPT, praising the chatbot for how well it had assisted him. ChatGPT is undoubtedly a very useful tool, but the fact remains that it can turn into an effective weapon in the wrong hands.