Questionable AI work habits rampant among US firms

August 15, 2025

A new study from AI security provider CalypsoAI reveals a “growing use and misuse of AI” within US organizations by employees at all levels, including C-suite executives.

Of note, it said, “half (50%) of executives say they’d prefer AI managers over a human, although 34% aren’t entirely sure they can tell the difference between an AI agent and a real employee. Over a third of business leaders (38%) admit they don’t know what an AI agent is — the highest of any role. Almost the same proportion (35%) of C-suite executives said they have submitted proprietary company information so AI could complete a task for them.”

Those findings and others are contained in the firm’s study, The Insider AI Threat Report, which describes a “hidden reality inside today’s enterprises: employees at every level are misusing AI tools, often without guilt, hesitation, or oversight.”

For many, it is fine to break rules

The survey of more than 1,000 US workers revealed that, overall, 45% of employees trust AI more than their co-workers, 52% would use AI to make their job easier even if doing so violates company policy, and 67% of executives say they’d use AI even if it breaks the rules.

The misuse, said the Dublin-based company in the release, even extends to “highly regulated industries” and includes:

60% of respondents from the finance industry admitting to violating AI rules, with an additional one-third saying they have used AI to access restricted data.

42% of employees in the security industry knowingly using AI against policy, and 58% saying they trust AI more than they do their co-workers.

A mere 55% of workers in the healthcare industry following their organization’s AI policy, and 27% saying they “would rather report to [AI] than a human supervisor.”

Asked what prompted the study, CalypsoAI CEO Donnchadh Casey said via email on Friday, “We wanted hard data on what is happening inside enterprises with AI adoption. External threats often get the attention, but the immediate and faster-growing risk is inside the building, with employees at all levels using AI without oversight. Our customers are already telling us they are seeing this risk grow. The research confirms it.”

‘Shadow AI now the new shadow IT’

His initial reaction to the findings as they began to pour in, he said, particularly around C-suite leaders’ AI habits, was surprise at how quickly the C-suite is bypassing its own rules.

Senior leaders, said Casey, “should set the standard, yet many are leading the risky behavior. In some cases, they are adopting AI tools and agents for business tasks faster than the teams responsible for securing them can respond. Our customers see the same pattern across industries, which is why this is as much a leadership challenge as it is a governance challenge.”

Justin St-Maurice, technical counselor at Info-Tech Research Group, said, “Shadow AI has become the new shadow IT. Employees are using unsanctioned tools to get real work done because AI can deliver two things they actually feel: cognitive offload, which takes the drudge work off their plates, and cognitive augmentation, which helps them think, write, and analyze faster.”

CalypsoAI’s numbers, he said, “show how strong that pull is. Their data shows that more than half of workers say they would use AI even if their organization’s policy says no, a third have already used it on sensitive documents, and almost half of surveyed security teams admitted to having pasted proprietary material into public tools. I’m not sure it’s as much about disloyalty as it is about how governance and enablement lag behind how people work today.”

The risk here is clear, added St-Maurice, because every unmonitored prompt can lead to intellectual property, corporate strategies, sensitive contracts, or customer data leaking out to the public. “And naturally,” he noted, “if IT blocks these AI services, it’ll drive users further underground to look for new ways to access them. The practical fix is through structured enablement.”

A proper strategy, he said, is to provide a sanctioned AI gateway, connect it to identity, log prompts and outputs, apply redaction for sensitive fields, and publish a few clear and plain rules that people can remember. This should be paired with short, role-based training and a catalog of approved models and use cases. This gives employees a safe path to the same gains.
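The gateway pattern St-Maurice describes can be sketched in a few lines. The snippet below is a minimal illustration, not any vendor’s actual product: it redacts sensitive fields from a prompt and logs the sanitized version before it would be forwarded to an approved model. The regex patterns and the `gateway_submit` helper are hypothetical examples; a real deployment would use the organization’s own sensitive-data definitions and a durable audit store.

```python
import re

# Hypothetical redaction patterns for illustration only; a real gateway
# would be tuned to the organization's own sensitive-data categories.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive fields with placeholder tokens before the
    prompt leaves the gateway."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

# In-memory log stands in for the prompt/output logging St-Maurice
# recommends; production systems would persist this with identity info.
audit_log = []

def gateway_submit(user: str, prompt: str) -> str:
    """Sketch of the sanctioned path: redact, log, then forward."""
    safe = redact(prompt)
    audit_log.append({"user": user, "prompt": safe})
    return safe  # here the request would go to the approved model
```

The point of the sketch is that redaction and logging sit in one choke point, so employees get the productivity gain while the security team keeps visibility.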

Casey agreed, noting that any solution geared toward correcting the problem of unauthorized AI use must address both people and technology.

“Many enterprises’ initial reaction is to block AI entirely, but this is counterproductive, as employees often circumvent rules to capture AI productivity gains,” he said. “A better approach is to give access to AI across the organization, but monitor and control this access to step in when behavior deviates from policy.”

This, he said, means organizations should have clear, enforceable policies paired with real-time controls that secure AI activity wherever it happens, which includes oversight of AI agents used for business tasks that can operate at scale and touch sensitive data.

“By securing AI where it is deployed and doing real work, enterprises can allow its use without losing visibility or control,” he said.

The survey was conducted in June by research firm Censuswide, which surveyed 1,002 full-time office workers in the US, aged 25 to 65.

Source: Computerworld
