Illuminating the Shadows: Managing the Risks of Shadow AI in Modern Enterprises

Understanding the challenge of Shadow AI

Shadow AI – a dramatic term for a new problem. With the rise of widely available consumer-level AI services offering easy-to-use chat interfaces, anyone from the summer intern to the CEO can pick up these shiny new AI products. However, anyone who has ever used a chatbot can appreciate the challenges and risks such tools pose. They are very open-ended, often not very useful unless implemented properly (remember SmarterChild??), and the quality and content of their responses depend heavily on the person using them.

Many companies today are unsure how to regulate their employees’ use of these tools, particularly because of the open-ended nature of the interaction and the lightweight, browser-based interface. There is a real risk that employees enter confidential or sensitive information, and an InfoSec team would have no visibility into it. There is currently almost no regulation around what can be used as training data for AI models, so one should assume that anything put into a chatbot is not just between you and the bot.

With nothing running locally on employees’ machines, InfoSec teams have to get creative in managing use, and most teams are unsure how widespread the issue actually is.
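One low-effort way to gauge how widespread the issue is, is to mine egress data the organization already collects. The sketch below is a minimal, hypothetical example that scans a web proxy or firewall access log for requests to well-known consumer AI chat domains; the domain list, log path, and log format are all assumptions to adapt to your environment, not a definitive inventory.

```python
import re
from collections import Counter

# Hypothetical shortlist of consumer AI chat endpoints; not exhaustive.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

# Matches anything hostname-shaped in a log line.
HOSTNAME = re.compile(r"[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def tally_ai_usage(log_path: str) -> Counter:
    """Count log lines that mention a known AI chat domain, keyed by domain."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for host in HOSTNAME.findall(line):
                if host.lower() in AI_DOMAINS:
                    hits[host.lower()] += 1
    return hits

if __name__ == "__main__":
    # Assumed path: point this at your own proxy or firewall access log.
    for domain, count in tally_ai_usage("/var/log/proxy/access.log").most_common():
        print(f"{domain:<30} {count}")
```

Even a rough tally like this turns “we’re unsure how widespread it is” into a concrete baseline before any policy decision is made.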

Mitigating the risks

Because these services are so lightweight, companies are left with few options to mitigate their usage. One could, of course, use firewalls to block all network traffic to OpenAI and the like, but as companies look to take advantage of the benefits of this technology, no InfoSec or security team wants to be the department of 'no'. So how can one weed out the potentially harmful situations, where employees may be putting sensitive information in places they shouldn't, from the legitimate, productive ones?
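Short of blanket blocking, one middle path is selective inspection: allow the traffic, but flag outbound prompts that look like they carry sensitive data. The function below is a minimal sketch of that idea, assuming you can see request bodies at all (for example via a TLS-inspecting proxy, which is its own policy decision); the patterns are illustrative only, not a complete DLP ruleset.

```python
import re

# Illustrative patterns for data that should never leave the company;
# a real deployment would use the organization's own DLP rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only)\b"),
}

def flag_sensitive(prompt_text: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt_text)]

if __name__ == "__main__":
    sample = "Summarize this memo marked CONFIDENTIAL: Q3 revenue was..."
    matches = flag_sensitive(sample)
    if matches:
        print(f"Alert: prompt matched {matches}; route to InfoSec for review.")
```

An alert-first approach like this lets the security team coach employees on safer use rather than simply saying 'no'.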
