Open source, open risks: The growing dangers of unregulated generative AI


While mainstream generative AI models have built-in safety barriers, open-source alternatives have no such restrictions. Here’s what that means for cyber crime.


There’s little doubt that open source is the future of software. According to the 2024 State of Open Source Report, over two-thirds of businesses increased their use of open-source software in the last year.


Generative AI is no exception. The number of developers contributing to open-source projects on GitHub and other platforms is soaring. Organizations are investing billions in generative AI across a vast range of use cases, from customer service chatbots to code generation. Many of them are building proprietary AI models either from the ground up or on the back of open-source projects.


But legitimate businesses aren’t the only ones investing in generative AI. It’s also a veritable goldmine for malicious actors, from rogue states bent on spreading misinformation among their rivals to cyber criminals developing malicious code or targeted phishing scams.


Tearing down the guardrails


For now, one of the few things holding malicious actors back is the guardrails developers put in place to protect their AI models against misuse. ChatGPT won’t knowingly generate a phishing email, and Midjourney won’t create abusive images. However, these models belong to entirely closed-source ecosystems, where the developers behind them have the power to dictate what they can and cannot be used for.


It took just two months from its public release for ChatGPT to reach 100 million users.