May 1, 2024
Gain Better Visibility of Your Shadow AI in 2024 – No BYO-AI Unless Approved BY-YOU
Meet shadow AI, the bigger and meaner child of shadow IT. AI is a hot topic right now, but it has a bad track record with security.
Its most appealing (and most threatening) quality is that it’s almost always free to use. It’s the epitome of where technology meets good vs bad, hooking in users at every level of authority, trading easy access to fast answers for risks they rarely understand. It’s a dangerous game that can be merciless to anyone charmed into handing over sensitive information. One slip-up and shadow AI has won.
With new GenAI tools on tap, like ChatGPT, Google Bard, and Microsoft Copilot (the list goes on), how can you gain visibility and ensure staff are using these apps safely?
A quick differentiation between the two: Shadow IT & Shadow AI
Shadow IT is the unsanctioned use of software, hardware, devices, applications and services. It risks the privacy of your company data by routing it through undisclosed platforms that aren’t easily tracked down.
Shadow AI is a subset of shadow IT but, arguably, a bigger risk. It’s the unsanctioned use of generative AI applications, tools and services – which are designed to absorb what users enter and learn from it for future prompts. Any private data you input risks being regurgitated to anyone, anywhere.
The big questions all businesses should be asking in 2024
- How can we see where unsanctioned GenAI is being used within our company?
- How can we understand where GenAI is used outside our IT governance policy?
- How can we ensure our customer data isn’t ‘accidentally’ leaked via a whim-of-the-moment GPT prompt from a team member?
- How can we be sure our teams are sourcing trusted & credible information for our customers?
- How can we trace hidden GenAI use within our teams?
Gaining visibility starts with understanding company culture
How your employees behave depends on your company culture and their collective understanding of your company values & policies. From our experience, the best way to drive a change in company culture is through discussion and education that raise awareness of the risks involved.
1. Get clear about your POLICIES and GUIDELINES
If you haven't looked at your security policies and guidelines lately, it might be time to dust them off and review them.
- Ensure they are relevant to emerging shadow AI and shadow IT trends.
- Review which AI tools and company technologies you permit for employee use.
- Outline what the problem is and why these policies are in place.
- Format these policies and guidelines clearly to ensure they are easy to read and access.
2. Start the discussion, EDUCATE on threats and risks
In our opinion, the number one rule in culture change is keeping the topic at the forefront of discussion. The more consistently you communicate and the more conversations you spark, the better your chances of raising awareness.
A big part of your discussion should focus on educating your teams about the major risks involved if policies aren't followed and the magnitude of the bigger problem at play.
- Plan your message: “We don’t offer BYO-AI unless it’s been approved BY-US”.
- Get a timeline going – from our own experience, these types of things often have no end date, and there will always be room to educate further. Creating a rough outline of your messaging strategy is key.
- Outline the implications and risks involved with Shadow AI.
- Outline why you are doing this and why it’s important.
3. FINANCIAL CONTROLS for an extra pair of eyes
Free tools make shadow AI tricky to catch, but paid enterprise AI tools leave a trail: they usually come with a subscription fee, so usage can be tracked. Work closely with your Chief Financial Officer to review invoices from unfamiliar tools and services. If an unexpected bill shows up, investigate where in the company the tool is being used to catch shadow AI at play.
- Check your bills regularly.
- Who has the payment details?
- Where are the AI tool invoices coming from?
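The invoice review above can even be partly automated. Here is a minimal sketch that flags charges from known GenAI vendors in an expense export – the vendor watchlist and CSV columns are our own illustrative assumptions, not tied to any particular finance system:

```python
import csv
import io

# Hypothetical watchlist of GenAI vendors -- replace with your own approved/unapproved lists.
AI_VENDORS = {"openai", "anthropic", "midjourney", "perplexity", "jasper"}

def flag_ai_charges(csv_text):
    """Return expense rows whose vendor field matches a known GenAI provider."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        vendor = row["vendor"].strip().lower()
        if any(name in vendor for name in AI_VENDORS):
            flagged.append(row)
    return flagged

# Example expense export (column names are assumed for illustration).
sample = """vendor,amount,cost_centre
OpenAI LLC,20.00,Marketing
Stationery Co,45.10,Operations
Midjourney Inc,10.00,Design
"""

for row in flag_ai_charges(sample):
    print(f"{row['vendor']} -> {row['cost_centre']} (${row['amount']})")
```

A scan like this won’t catch free-tier use, but it surfaces paid subscriptions your finance team may not know are AI tools.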
4. TECHNICAL CONTROLS for an extra safeguard
Implementing technical controls that detect and block unsanctioned applications and services, at the network level or on the device, can be a good safeguard while users are at work. But it’s hard to keep every vector visible and protected, especially with the VPN and incognito capabilities that come part and parcel with any laptop or cell phone.
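As one illustrative sketch of the detection side, a script can scan a web-proxy or DNS log export for requests to GenAI services. The domain list and log format below are assumptions for demonstration – swap in your own sanctioned/unsanctioned lists and your gateway’s real export format:

```python
# Hypothetical list of GenAI endpoints to watch for -- adjust to your environment.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

def genai_hits(log_lines):
    """Yield (user, domain) pairs for requests made to GenAI services."""
    for line in log_lines:
        user, domain = line.split()[:2]   # assumed format: "<user> <domain> <timestamp>"
        if domain in GENAI_DOMAINS:
            yield user, domain

# Example proxy log export (format assumed for illustration).
log = [
    "alice chat.openai.com 2024-05-01T09:12",
    "bob intranet.example.com 2024-05-01T09:13",
    "carol copilot.microsoft.com 2024-05-01T09:15",
]

for user, domain in genai_hits(log):
    print(f"{user} visited {domain}")
```

Reports like this are best used to start a conversation with the teams involved, not to police them – remember, the goal is visibility and education.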
An important part of preparing to use AI in your environment is understanding where your sensitive and confidential data is stored and ensuring it is properly labelled. Tagging your data is essential so that AI tools don’t mistakenly misuse or overshare it.
Used appropriately and securely, AI tools can be extremely useful for your team. However, caution must be applied. If you’re looking for more assistance in this area, we’d be happy to provide more guidance.