How generative AI is expanding the insider threat attack surface


As the adoption of generative AI (GenAI) soars, so too does the risk of insider threats. This puts even more pressure on businesses to rethink security and confidentiality policies.


In just a few years, artificial intelligence (AI) has radically changed the world of work. Sixty-one percent of knowledge workers now use GenAI tools — particularly OpenAI’s ChatGPT — in their daily routines. At the same time, business leaders, driven partly by a fear of missing out, are investing billions in GenAI-powered tools. And it’s not just chatbots: they’re also investing in image synthesizers, voice-cloning software and even deepfake video technology for creating virtual avatars.


We’re still some way off from GenAI becoming indistinguishable from humans. If, or perhaps when, that happens, the ethical and cyber risks that come with it will only continue to grow. After all, once it becomes impossible to tell whether someone or something is real, the risk of people being unwittingly manipulated by machines surges.


GenAI and the risk of data leaks


Much of the conversation about security in the era of GenAI concerns its implications for social engineering and other external threats. But infosec professionals must not overlook how the technology can greatly expand the insider threat attack surface, too.
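One common way that surface expands is employees pasting confidential material into prompts bound for third-party GenAI services. As a loose illustration only (not real DLP tooling, and using made-up detection patterns), a minimal sketch of a gateway that scrubs obvious secrets from a prompt before it is forwarded might look like this:

```python
import re

# Hypothetical patterns for sensitive data an employee might paste into a
# GenAI prompt. Real data-loss-prevention rule sets are far broader; these
# two exist purely for illustration.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matched values with placeholders and report which rules fired."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits

if __name__ == "__main__":
    raw = "Summarize this: contact jane.doe@example.com, token sk-abcdef1234567890XYZ"
    clean, flagged = redact_prompt(raw)
    print(clean)    # prompt with placeholders in place of the matched values
    print(flagged)  # ['email', 'api_key']
```

In practice, checks like this would sit alongside access controls and logging rather than replace them; the point is simply that what an insider sends to a chatbot is now part of the data-exfiltration picture.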


Given the rush to adopt GenAI tools, many companies ha ..
