How cyber criminals are compromising AI software supply chains


With the adoption of artificial intelligence (AI) soaring across industries and use cases, preventing AI-driven software supply chain attacks has never been more important.


Recent research by SentinelOne exposed a new ransomware actor, dubbed NullBulge, which targets software supply chains by weaponizing code in open-source repositories like Hugging Face and GitHub. The group, claiming to be a hacktivist organization motivated by an anti-AI cause, specifically targets these resources to poison data sets used in AI model training.


Whether you use mainstream AI solutions, integrate them into your existing tech stack via application programming interfaces (APIs) or develop your own models from open-source foundation models, the entire AI software supply chain is now squarely in cyberattackers' sights.


Poisoning open-source data sets


Open-source components play a critical role in the AI supply chain. Only the largest enterprises have access to the vast amounts of data needed to train a model from scratch, so most organizations rely heavily on open-source data sets like LAION 5B or Common Corpus. The sheer size of these data sets also makes it extremely difficult to maintain data quality and to comply with copyright and privacy laws. By contrast, many mainstream generative AI models like ChatGPT are black boxes in that they use their own curated data sets, which comes with its own set of security challenges.
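One basic defense against tampered or poisoned data sets is to pin and verify the cryptographic digest of every file before it enters a training pipeline. The sketch below is a minimal illustration of that idea in Python; the file path and pinned digest are hypothetical placeholders, not real values from LAION 5B or any other data set.

```python
import hashlib
from pathlib import Path

def verify_sha256(path: Path, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream a file in chunks and compare its SHA-256 digest to a pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

# Hypothetical usage: refuse to train on a shard whose digest has drifted
# from the value recorded in a trusted manifest.
shard = Path("data/shard_00001.parquet")          # placeholder path
pinned = "0" * 64                                 # placeholder digest
if shard.exists() and not verify_sha256(shard, pinned):
    raise RuntimeError(f"Integrity check failed for {shard}")
```

Digest pinning does not detect poisoning that happened before the manifest was created, but it does stop a data set from being silently swapped out between download and training.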


Verticalized and proprietary models may refine open-source foundation mo ..
