AI and the cyber landscape

As Artificial Intelligence (AI) disrupts almost every sector of the economy, opportunistic cybercriminals are experimenting with its malicious applications. In July, reports emerged that large language models (LLMs) specialised in generating malicious content were being sold on the dark web. James Tytler, Cyber Associate at the corporate intelligence and cyber security consultancy S-RM, explores how cybercriminals could exploit AI to alter the threat landscape, but also how the threat from these tools can easily be overstated.


The rise of ‘dark’ chatbots


LLMs are a form of generative AI trained on vast amounts of written input. They produce human-quality text, images, code, and other media in response to prompts, and their rapid development has sparked concerns within the security community that malicious actors will use them to code malware or draft convincing phishing emails. In April, Europol warned that LLMs could be abused to commit fraud “faster, much more authentically, and at a significantly increased scale”.


In response to these concerns, the developers of major publicly accessible LLMs have hastily introduced constraints on what prompts their chatbots will accept. This includes denying so-called “jailbreaking” prompts intended to bypass ethical boundaries. OpenAI’s ChatGPT, the first commercially available LLM, once allowed users to request a phishing email by presenting it as part of a training exercise. ChatGPT now refuses to engage with any prompt containing terms such as “malware” or “phishing”, regardless of context.


In early July, however, cybercriminals began selling access to what are described as unrestricted “evil clones” of ChatGPT on the dark web. The most well-known, “WormGPT”, is allegedly offered on a subscription basis and marketed for its ability to produce phishing emails and malware. Following a spike in media attention, the purported developer of WormGPT …