Odds are that the shadow AI threat is lurking in your organization.
In a recent survey by TELUS Digital, 68 percent of enterprise employees who use generative AI at work admit to using publicly available tools through personal accounts. More than half (57 percent) admit to entering sensitive information into them, including personal data, confidential company information and customer information. Almost a third (29 percent) say their employers have policies prohibiting them from entering sensitive information into gen AI tools.
Gen AI enables employees to automate many routine tasks, find information quickly and jumpstart creative thinking. However, unsanctioned use of publicly available tools increases the threat of data breaches and compliance violations.
Employees aren’t just using ChatGPT to draft emails. Some are using sensitive and proprietary data to train public domain AI models. That information effectively becomes part of the model, creating a new level of risk.
The Roots of Shadow AI
Shadow IT — the creation, acquisition or use of unsanctioned software — has been a problem for years. However, shadow IT usage exploded with the advent of cloud-based services, which enabled users to access applications and services with a few clicks and a credit card. The remote work model also fueled 59 percent growth in shadow IT, according to a CORE research report.
The causes of shadow IT are typically innocent. Employees want to adopt tools that will make their jobs easier, and find that IT approval processes are too slow. Others simply feel more comfortable using their own devices and preferred apps. Many employees use shadow IT due to the perceived inadequacy of sanctioned tools and pressures to become more productive. Often they do so despite knowing the risks.
Similar motivations are driving shadow AI adoption. In the TELUS Digital study, 60 percent of employees said gen AI helps them work faster, and 57 percent said it makes their jobs easier. Half cited the ability to offload repetitive tasks, while 51 percent cited increased creativity.
Understanding Shadow AI Risks
However, shadow AI comes with greater risks. Like shadow IT, it increases the threat of data loss and exposure due to weak security practices and lack of IT oversight and control. Beyond that, it creates a risk of sensitive data being subsumed into AI training models. Publicly traded companies and organizations with significant regulatory requirements could face substantial penalties if private data is entered into publicly available AI tools.
Consumer-grade gen AI tools also come with threats such as information leakage attacks and goal hijacking. Attackers may bypass any guardrails developers have built into the interface or AI model in order to generate malicious content and other harmful output. These types of attacks exploit the model’s inability to distinguish legitimate from illegitimate requests.
Additionally, shadow AI presents a more prosaic risk — inaccurate output. If the model isn’t properly trained or the data used is inaccurate or incomplete, the output cannot be trusted.
How to Combat Shadow AI
Traditional security tools and IT processes aren’t designed to detect shadow AI. As an initial step, organizations should conduct a formal audit using networking monitoring, cloud access security brokers (CASBs) and other tools to identify shadow AI usage. They should also develop procedures for continuous monitoring to detect new shadow AI apps and ensure compliance with security policies.
The audit can also help organizations fine-tune their gen AI policies and training programs to ensure that employees understand the risks involved. Because most employees turn to shadow AI for help with growing workloads, organizations should use the audit to gain an understanding of AI tool gaps and define strategies that enable employees to use AI securely and effectively. Total AI bans only serve to drive AI use further underground.
Data loss prevention (DLP) tools can help prevent employees from entering sensitive data into AI solutions. Administrators can set policies for the types of data to look for and the action the DLP takes when it detects inappropriate activity.
How Technologent Can Help
Technologent has a practice dedicated to AI and a team with deep expertise in developing AI strategies. We recognize that security must be built into every AI initiative, and can help organizations select the right tools to detect threats and minimize the risk of data loss or exposure. Let’s work together to integrate AI into your workflows and leverage its power to boost efficiency and innovation while reducing risk.

July 2, 2025
Comments