The Enterprise Trust Gap: Why Companies Fear Losing Control of AI

A new set of findings suggests that America’s biggest worry about artificial intelligence isn’t job loss or killer robots. It’s something far more practical: control.

A joint study by Cybernews and nexos.ai, which tracked public concerns from January to October 2025, shows that fears about governance and privacy sit well above worries about redundancy. Searches linked to “control and regulation” scored highest over the ten-month period, with “data and privacy” close behind. Concerns about job displacement came last, despite a year filled with headlines about tech layoffs.

Žilvinas Girėnas, head of product at nexos.ai, says this mirrors what he sees in large organisations.

“Leaders aren’t scared of AI in principle. They’re scared of losing oversight,” he explains. “When teams start using unapproved AI tools, companies can’t track what data is being used or where it ends up. And without that visibility, you can’t manage risk or meet compliance rules.”

Why AI triggers public anxiety

The report taps into a broader rise in AI-related unease. Much of it comes down to how quickly the technology is spreading and how little the average person understands about what goes on behind the scenes. Many advanced models still act like “black boxes”, offering no clear insight into why they produce the answers they do. That lack of transparency fuels fears about losing control.

There’s also the issue of data. AI systems train on vast amounts of information, often scraped from social platforms, browsing behaviour, smart devices and other sources people don’t actively consent to. Add to that the ongoing drumbeat of data breaches, and it’s no surprise that privacy ranks so high as a concern.

Another layer is the rise of deepfakes and synthetic media. When realistic fake content becomes common, trust begins to erode. People worry about what’s real, who to believe, and whether AI will distort public debate. Bias plays into this too; AI systems trained on skewed datasets can replicate and amplify discrimination in areas such as hiring and lending.

Job fears still matter, even if they rank lowest. For many, it’s not just the threat to income. It’s the idea that work tied to thinking and decision-making—a big part of how people define themselves—could be replaced by a machine.

Girėnas says organisations feel a version of this themselves. “When people don’t understand how AI actually works, confidence drops. It slows down adoption. The only way forward is to build a system of trust, and that starts with complete visibility across the AI tools and workflows inside a company.”

What happens when companies lose control

For businesses, the risks aren’t theoretical. McKinsey’s most recent research shows a clear pattern of real-world consequences when AI isn’t governed properly.

Inaccurate outputs
The most common negative impact reported is poor or unreliable results. If staff use “shadow AI” tools—models that haven’t been approved or tested—there’s a higher chance that biased or outright nonsensical content ends up in products, reports or client-facing work.

Cyber-security worries
More than half of organisations using AI are actively trying to manage security threats. Tools that aren’t centrally controlled can leak confidential data or be exploited by attackers.

Intellectual property risks
Feeding proprietary code or strategic plans into public AI models is a growing concern. Companies that use AI heavily (“AI high performers”) report higher rates of IP-related issues than more cautious firms.

Regulatory exposure
With new rules emerging—from GDPR to the EU AI Act—companies need consistent governance. Without it, the risk of fines, investigations and legal action increases. According to the report, 43 per cent of organisations are worried about this.

How businesses can close the trust gap

Girėnas argues that most of this anxiety can be eased if companies take a structured approach. His advice centres on visibility rather than restriction.

1. Centralise governance
Set one clear policy for how AI is used across the entire organisation.

2. Keep humans involved
Adopt a “human-in-the-loop” model so that any AI-generated output is reviewed before it reaches a critical stage. A brief sketch of what this can look like in practice follows the list.

3. Make governance a board-level issue
AI safety shouldn’t sit quietly in a tech department. Executives need to lead on it.

4. Focus on transparency, not bans
Understand what AI tools staff are using, rather than trying to block everything. People will always find a workaround, so visibility matters more than restriction.
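To make step 2 a little more concrete, here is a minimal Python sketch of a human-in-the-loop gate. It is illustrative only, not tied to any particular vendor or to nexos.ai’s platform, and the names used (AIOutput, submit_for_review, publish) are assumptions made for the example. The point is simply that nothing AI-generated reaches a client or a production system until a person has signed it off.

from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIOutput:
    # A piece of AI-generated content awaiting human sign-off.
    content: str
    source_tool: str                      # which approved AI tool produced it
    status: ReviewStatus = ReviewStatus.PENDING


def submit_for_review(output: AIOutput, queue: list) -> None:
    # New drafts go into a review queue; nothing ships while PENDING.
    queue.append(output)


def human_review(output: AIOutput, approve: bool) -> None:
    # A person makes the final call; the AI never self-approves.
    output.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED


def publish(output: AIOutput) -> str:
    # Only approved content can reach a critical stage such as a client deliverable.
    if output.status is not ReviewStatus.APPROVED:
        raise PermissionError("This output has not passed human review")
    return output.content


# Example: a draft from an approved tool is held until someone signs it off.
review_queue = []
draft = AIOutput(content="Quarterly summary draft...", source_tool="approved-llm")
submit_for_review(draft, review_queue)
human_review(draft, approve=True)
print(publish(draft))

In a real deployment the review step would also record who approved what and when, which doubles as the audit trail that regulators increasingly expect.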