A Bergen workshop revealed a blind spot in AI Governance

Speednet Nordics
November 4, 2025 | AI

During the last week of October in Bergen, a group of leaders from banking, fintech, and insurance gathered for a hands-on session titled Lunsj & Læring: AI Compliance i praksis (Lunch & Learn: AI Compliance in Practice). The event was hosted by Finance Innovation Norway in partnership with PwC, Speednet, and Dark Horse.

The focus was clear: how can companies in the Nordics move from static AI policies to active oversight? To ground the discussion, we ran a live survey with the participants. The results showed promising maturity in policy development but exposed a critical blind spot in monitoring real-world AI behavior.

Norwegian companies are writing the rules, but missing the alarms

More than 65 percent of respondents said their company has an internal AI policy or governance framework. That is a strong signal. It shows that Norwegian firms understand the importance of proactive AI governance.


But when asked whether they could immediately detect if one of their AI systems silently failed, only 17 percent said yes.

This reveals a dangerous gap. Policies are in place, but if a model begins producing flawed or biased results, most companies would not know until the damage is done.

AI incidents will not just cost you fines, they will cost you trust

In the same survey, 83 percent said AI risk is not just about avoiding fines. It is about preserving trust.

This trust challenge is both external and internal. Externally, a chatbot that gives bad advice or a model that quietly discriminates can break customer loyalty in days. Internally, the stakes are just as high. Teams across the business may see AI’s potential, but the moment an AI system goes off course, it can trigger internal resistance, delay adoption, or create a compliance block that stalls innovation altogether.

Once confidence inside the organization is lost, AI becomes a risk rather than a competitive advantage.

Few companies know when their AI is misbehaving

Most companies in the room are moving in the right direction at the policy level. But almost none have real-time monitoring or behavior tracking in place.

That means when something does go wrong — and it will — most will not know until consequences are visible, costly, and public.

This brings us to one of the most important questions in the survey:
“How likely do you think it is that your organization could face an AI incident?”

Answers clustered in the mid-range. No one ruled it out. That is a good sign. But in reality, the question is not whether AI will make mistakes, it is when.

Recent studies support this. A 2024 report from Stanford’s Institute for Human-Centered AI found that every production-level AI system tested over 12 months exhibited at least one instance of behavioral drift or performance degradation. These ranged from subtle data bias to complete misalignment with business intent.

If your AI system is not being monitored continuously, that drift will happen in the dark.
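
To make this concrete, here is a minimal sketch of what a continuous drift check can look like, assuming you log the scores your model produces. The names and data in the example are illustrative, not a description of any specific product: it compares a reference window of model scores against the most recent window using a two-sample Kolmogorov-Smirnov test and flags an alert when the distributions diverge. Real monitoring would track more signals (input data, fairness metrics, outcome quality), but even a check this small moves drift out of the dark.

```python
# Minimal, illustrative sketch of score-drift monitoring for a deployed model.
# The data here is simulated; in practice the scores would come from your own logs.

import numpy as np
from scipy.stats import ks_2samp


def detect_score_drift(reference_scores, recent_scores, alpha=0.01):
    """Compare recent model output scores against a reference window.

    Returns (drifted, statistic, p_value); a low p-value is one common
    signal that the model's behavior has silently shifted."""
    statistic, p_value = ks_2samp(reference_scores, recent_scores)
    return p_value < alpha, statistic, p_value


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Scores captured when the model was validated (reference window).
    reference = rng.beta(2, 5, size=5_000)
    # Scores from the last 24 hours, deliberately shifted here to simulate drift.
    recent = rng.beta(3, 4, size=1_000)

    drifted, stat, p = detect_score_drift(reference, recent)
    if drifted:
        print(f"ALERT: score distribution drift detected (KS={stat:.3f}, p={p:.4f})")
    else:
        print("No significant drift in model scores.")
```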

What is slowing progress on AI Governance?

The open answers in our survey reveal some shared pain points:

  • Lack of clear ownership for AI oversight
  • Shortage of tools designed for AI-specific risks
  • Pressure on time and budget
  • Uncertainty about how to “audit” AI systems in practice

One participant captured the tension well: “AI is making the industry super risk-averse.” That resistance is not because AI is inherently dangerous. It is because organizations cannot yet prove that their systems are behaving as expected.

Policy is not enough: Nordic companies need control

The companies at the Bergen session are forward-looking. Many already have governance structures in place. They recognize AI as a high-priority risk.

But real governance means going beyond documentation. It means monitoring, testing, alerting, and proving that your AI works the way it should — even as it learns, adapts, and evolves.
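
As a small illustration of what “proving” can mean in practice, the sketch below appends each monitoring result to an append-only audit log. The file name, system name, and check name are assumptions made for the example, not part of any specific tool; the point is simply that every automated check leaves a timestamped record you can show an auditor.

```python
# Illustrative sketch: turning monitoring checks into an auditable record.
# Assumes some upstream check (for example, a drift check like the one sketched earlier).

import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical location for the audit trail


def record_check(system: str, check: str, passed: bool, details: dict) -> None:
    """Append one monitoring result to an append-only JSON-lines audit trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "check": check,
        "passed": passed,
        "details": details,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    # Example: a nightly drift check on a (hypothetical) credit-scoring model failed.
    record_check(
        system="credit-scoring-v3",
        check="score_distribution_drift",
        passed=False,
        details={"ks_statistic": 0.18, "p_value": 0.0004, "window": "last_24h"},
    )
    print(f"Logged monitoring result to {AUDIT_LOG}")
```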

This is why Speednet built AI Auditor. It helps companies close the visibility gap by continuously monitoring AI behavior, alerting teams to risk, and documenting compliance with both internal policies and legal frameworks like the EU AI Act.

Final word: fix the blind spot before it hurts your business

The Bergen workshop showed that Nordic firms are not behind. In fact, many are ahead of their European peers in policy and awareness. But awareness is not control. If you cannot see what your AI is doing, you cannot govern it.


The next step is clear. Governance must extend from strategy to system behavior. And that begins with visibility, before the gap starts costing trust, customers, or innovation.

This blog post was created by our team of experts specializing in AI Governance, Web Development, Mobile Development, Technical Consultancy, and Digital Product Design. Our goal is to provide educational value and insights without marketing intent.

If you want to meet us in person, click here and we’ll get in touch!