ChatGPT Is Pretty Good at Writing Malware, It Turns Out

The trendy new chatbot has many skills, and one of them is writing "polymorphic" malware that will destroy your computer.

By Lucas Ropek

ChatGPT, the multi-talented AI chatbot, has another skill to add to its LinkedIn profile: crafting sophisticated “polymorphic” malware.

Yes, according to a newly published report from security firm CyberArk, the chatbot from OpenAI is mighty good at developing malicious programming that can royally screw with your machine. Infosec professionals have been trying to sound the alarm about how the new AI-powered tool could change the game when it comes to cybercrime, though the use of the chatbot to create more complex types of malware hasn’t been broadly written about yet.

CyberArk researchers write that code developed with the assistance of ChatGPT displayed “advanced capabilities” and could “easily evade security products,” the defining trait of a specific subcategory of malware known as “polymorphic.” What does that mean in concrete terms? The short answer, according to the cyber experts at CrowdStrike, is this:

A polymorphic virus, sometimes referred to as a metamorphic virus, is a type of malware that is programmed to repeatedly mutate its appearance or signature files through new decryption routines. This makes many traditional cybersecurity tools, such as antivirus or antimalware solutions, which rely on signature-based detection, fail to recognize and block the threat.

Basically, this is malware that can cryptographically shapeshift its way around traditional security mechanisms, many of which are built to detect malicious file signatures.
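To make that idea concrete, here’s a minimal, deliberately benign sketch in Python. It is not taken from CyberArk’s report; the payload, function names, and simple XOR scheme are illustrative assumptions. It shows only why hash-based signature matching fails against mutation: the same underlying payload is re-encoded with a fresh random key on each “generation,” so the bytes on disk, and therefore their signature, never repeat.

```python
import hashlib
import os

def signature(data: bytes) -> str:
    """Mimic a naive signature-based scanner: hash the raw bytes."""
    return hashlib.sha256(data).hexdigest()

def xor_encode(payload: bytes, key: bytes) -> bytes:
    """XOR the payload with a repeating key (a toy stand-in for the
    'new decryption routines' CrowdStrike describes)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# A harmless placeholder standing in for a malicious payload.
payload = b"print('this is a benign placeholder payload')"

# Each generation wraps the identical payload with a fresh random key,
# so every copy has different bytes and a different hash signature,
# even though the decoded behavior would be the same.
for generation in range(3):
    key = os.urandom(16)
    mutated = key + xor_encode(payload, key)  # key prepended so a stub could decode
    print(f"gen {generation}: signature = {signature(mutated)[:16]}...")

# A static blocklist keyed on any one generation's hash would miss
# every other generation -- that is the evasion in a nutshell.
```

Real polymorphic engines are far more elaborate, pairing each mutated body with a small decryption stub that runs first, but the core trick is the same: every copy a scanner sees looks like a brand-new file.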

Although ChatGPT is supposed to have filters that block malware creation, the researchers were able to outsmart those barriers by simply insisting that it follow the prompter’s orders. In other words, they bullied the platform into complying with their demands, something other experimenters have also observed when trying to conjure toxic content with the chatbot. For the CyberArk researchers, it was merely a matter of badgering ChatGPT into displaying code for specific malicious functions, which they could then assemble into complex, defense-evading exploits. The upshot is that ChatGPT could make hacking a whole lot easier for script kiddies and other amateur cybercriminals who need a little help generating malicious code.
