How semantic misalignment across teams and domains holds back data interoperability, and what it takes to build shared understanding at scale

In our article 2025 data management trends: the future of organizing for value, we identified interoperability as one of the three trends shaping how organizations create value with data in 2025.
But why is interoperability rising to the top of the agenda?
Primarily, data and AI use cases are moving from single-domain problems to cross-domain questions, driven by both offensive (e.g., data-driven strategies, cross-domain AI, data monetization) and defensive (e.g., CSRD and other external reporting requirements) data strategies. Take banking, for example: assessing customer lifetime value might require transaction data from finance, interaction logs from customer service, and risk profiles from compliance. Answering these high-impact, cross-domain questions requires more than just access to each domain’s systems – it demands bringing together data across departments, with consistent meaning and context. That’s where interoperability becomes essential.
We see two main challenges:
- A complex, diverse technology landscape makes it difficult to connect data
- Inconsistent semantics, or a lack of shared definitions, makes it difficult to interpret and use data across domains
While the first challenge is increasingly supported by modern approaches, the second – semantic alignment – remains highly organization-specific, and, in many cases, unresolved.
Recent trends, such as the shift toward domain-based data ownership, inspired by the introduction of the data mesh paradigm, have magnified the problem. Decentralization improves local data quality by giving teams accountability for the data they produce, but it also increases the risk of semantic misalignment between domains. ESG reporting is one example: it requires coordination across operations, HR, and finance, each using their own systems, language and metrics.
That is why modern data and AI use cases demand true data interoperability across the organization – not just technical connection, but a shared understanding. But how do you achieve that? To answer this, we first need to break interoperability down into its key components.
Breaking down data interoperability – and why semantic chaos is often overlooked
Before solving data interoperability, organizations need to recognize the different ways data alignment can break down across domains. Data, like language, is a structured representation of the real world – designed to convey meaning, but always shaped by context. And just as human communication can run into friction across languages or cultures, data integration can break down across systems or domains. These misalignments tend to fall into three categories.
- Technical interoperability: can systems connect at all? To communicate with someone abroad, your devices first need to connect; your phones must be online and your messaging apps must be compatible. In data, different technologies (like databases, APIs or platforms) must be able to establish a connection and exchange information, say, between a relational database and a data lake.
- Syntactic interoperability: can systems interpret the structure of the data? Once you’re connected, you need to use a structure both sides can follow. A French speaker and a Japanese speaker might switch to English to make the conversation possible. In data, this means aligning on formats, schemas (data structure definitions) and naming conventions. For example, one system might label a field client_id while another uses customer_number.
- Semantic interoperability: do systems agree on the meaning of the data? Even when speaking the same language, meaning can still diverge. Take the word football: it means different things in Europe and the US. The term is shared, but the concept isn’t. In data, the same happens when teams use terms like customer or revenue but define them differently depending on their context. Without a shared understanding, the data may appear integrated – but in practice, it creates confusion and leads to misinterpretation.
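To make the difference between the syntactic and semantic layers concrete, here is a minimal sketch (field names, records, and the rename map are hypothetical, not taken from a specific system). A simple rename map resolves the client_id / customer_number mismatch, but it says nothing about whether the two systems mean the same thing by those identifiers:

```python
# Two systems expose the same concept under different field names.
crm_record = {"client_id": "C-1001", "signup_date": "2024-03-15"}
billing_record = {"customer_number": "C-1001", "first_invoice": "2024-04-01"}

# A rename map resolves the *syntactic* mismatch by mapping both
# source fields onto one canonical name.
FIELD_MAP = {"client_id": "customer_id", "customer_number": "customer_id"}

def normalize(record: dict) -> dict:
    """Rename known fields to a shared canonical schema."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}

print(normalize(crm_record))      # {'customer_id': 'C-1001', 'signup_date': '2024-03-15'}
print(normalize(billing_record))  # {'customer_id': 'C-1001', 'first_invoice': '2024-04-01'}
```

Note that after normalization the records look interoperable, yet nothing guarantees that both systems agree on what a customer actually is. That gap is the semantic layer.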
Of the three interoperability categories, semantic interoperability is the hardest to solve – and yet the most critical when working across domains. Unlike technical and syntactic issues, which are increasingly addressed with modern integration platforms and tooling, semantic alignment remains dependent on shared understanding. It requires agreement on what data actually means across systems, teams and business contexts – which is exactly what makes it so difficult to solve in practice.
There is no single owner of definitions; meaning is distributed across domains. Misalignment often remains invisible until something goes wrong. And it’s not just a technical issue – resolving semantics requires input from the business, coordinated processes, and a mature approach to governance. To illustrate, consider these two real-life examples:
- In our work with a global retail organization, two systems both used the term active customer. On the surface, the data appeared to represent the same concept, but one team defined it as a customer who had logged in recently, while the other defined it as someone who had made a purchase in the past quarter. Combining these data sources without understanding the difference produced misleading insights.
- In another client case – a digital automotive retailer – issues arose during financial forecasting when the finance team relied on the commerce team’s definition of stock. While the commerce team defined stock as cars currently available for sale, the finance team needed all vehicles on the balance sheet. As a result, the financial projection was based on a fraction of the actual inventory – a semantic mismatch with material consequences.
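The active-customer mismatch can be sketched in a few lines. The definitions, customers, and dates below are illustrative, not from the actual client case; the point is only that two reasonable definitions of the same term select different populations:

```python
from datetime import date, timedelta

# Hypothetical customer events; all dates are illustrative.
customers = {
    "A": {"last_login": date(2025, 6, 1),  "last_purchase": date(2025, 1, 10)},
    "B": {"last_login": date(2024, 11, 2), "last_purchase": date(2025, 5, 20)},
    "C": {"last_login": date(2025, 5, 30), "last_purchase": date(2025, 5, 28)},
}

TODAY = date(2025, 6, 15)

# Team 1: "active" means logged in within the last 30 days.
active_by_login = {c for c, e in customers.items()
                   if TODAY - e["last_login"] <= timedelta(days=30)}

# Team 2: "active" means purchased within the last quarter (~90 days).
active_by_purchase = {c for c, e in customers.items()
                      if TODAY - e["last_purchase"] <= timedelta(days=90)}

print(active_by_login)     # {'A', 'C'}
print(active_by_purchase)  # {'B', 'C'}
```

Both teams can truthfully report their "active customers", yet any analysis that joins or compares the two sets without surfacing the underlying definitions will be wrong.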
So, even when organizations succeed in technically connecting their data – through modern platforms or federated query engines like Trino – semantic chaos often persists below the surface. In the worst case, it goes unnoticed, leading to flawed insights and strategic errors. In the best case, it is recognized, but resolving it requires extensive coordination across teams, delaying decisions and slowing down AI development.
That’s why solving semantic interoperability is so essential – and so often overlooked.
Solving semantic interoperability is a strategic investment
Once semantic interoperability is recognized as a critical challenge, the key question becomes: How do you align meaning across the organization?
In practice, we see two common approaches for organizations to manage meaning across teams and domains:
- The first is built on SLA-based standards: data-producing teams publish agreements that define what data is delivered, how it is structured and documented, and how it is intended to be consumed.
- The second is ontology-based modeling, which introduces a shared, formal (graph-based) model to represent the core business concepts and how they relate, creating a common language for data. Given the goal of cross-domain value creation, only concepts that are reused or interpreted across domains need to be modeled, allowing teams to participate flexibly where needed.
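In practice, ontologies are typically expressed in standards such as RDF/OWL and managed with dedicated tooling; as a toy illustration only, the core idea – a graph of shared business concepts and their relationships – can be sketched as subject-predicate-object triples (all concept names and relations below are hypothetical):

```python
# A minimal, hypothetical ontology fragment as (subject, predicate, object) triples.
ontology = [
    ("Customer", "is_a", "Party"),
    ("ActiveCustomer", "is_a", "Customer"),
    ("ActiveCustomer", "defined_by", "purchase within the last 90 days"),
    ("Order", "placed_by", "Customer"),
    ("Revenue", "derived_from", "Order"),
]

def related(concept: str) -> list:
    """Return all triples in which a concept appears as subject or object."""
    return [t for t in ontology if concept in (t[0], t[2])]

# Anyone in the organization can look up what a concept means and
# how it connects to other domains' concepts.
for triple in related("Customer"):
    print(triple)
```

The value is not in the data structure but in the agreement it encodes: ActiveCustomer has exactly one organization-wide definition, and every domain that consumes it can trace that definition and its relationships.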
While ontologies require more upfront effort and governance, they provide a strong foundation for reusability, AI enablement and enterprise-wide consistency. For example, they can be used to create structured business context for GenAI agents, which helps address one of the most common fears in enterprise AI adoption: agents making decisions without the right context. A well-designed ontology enables AI systems to reason over business terms, relationships, and constraints – making outputs more aligned, explainable, and trustworthy. (See our blog post on RAG and knowledge graphs for a deeper dive into how ontologies support AI context and semantic reasoning.)

Ontologies also open the door to fine-grained access management. By specifying policies on the shared semantic model – rather than hard-coding them on files or specific datasets – organizations can define access controls based on business concepts, roles, or relationships, making it easier to adapt to changing compliance needs or organizational structures.
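The access-management idea can be sketched as policies keyed on business concepts rather than on physical fields (the roles, concepts, and field-to-concept mapping below are hypothetical, assumed for illustration):

```python
# Hypothetical policies defined on semantic concepts, not physical columns.
POLICIES = {
    "SalaryAmount": {"hr_analyst", "finance_controller"},
    "CustomerEmail": {"marketing", "customer_service"},
}

# Physical fields mapped to the semantic concepts they represent (assumed mapping).
FIELD_TO_CONCEPT = {
    "payroll.gross_salary": "SalaryAmount",
    "crm.email": "CustomerEmail",
    "crm.signup_date": None,  # no restricted concept attached
}

def can_read(role: str, field: str) -> bool:
    """Allow access if the field's concept is unrestricted or the role is permitted."""
    concept = FIELD_TO_CONCEPT.get(field)
    if concept is None:
        return True
    return role in POLICIES.get(concept, set())

print(can_read("marketing", "crm.email"))             # True
print(can_read("marketing", "payroll.gross_salary"))  # False
```

When a new system is onboarded, only the field-to-concept mapping needs to be extended; the policies themselves stay defined once, on the shared semantic model.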
Solving semantic interoperability, whether through standards or ontologies, starts with understanding that this is not just a technical task. It’s a strategic investment:
- Establishing standards or ontologies takes time and expertise.
- Governance must be driven by value in data consumption, not just technical consistency.
- Leadership must make deliberate choices about which definitions matter, where alignment is needed, and how that alignment will be enforced across the organization.
The takeaway: solve your semantics to enable interoperability at scale
Ultimately, addressing semantic interoperability is a critical step in solving the broader interoperability challenge. Technical connectivity and structural alignment are foundational, but without shared meaning, data remains fragmented, misunderstood, or misused. As data and AI use cases increasingly span teams, systems, and domains, organizations that succeed will be those that treat interoperability, especially semantic alignment, as a capability to actively build, govern, and scale.
What’s next?
In our next post, we will take a closer look at ontology-based modeling in practice, including a real-world example and practical tips for getting started. For now, ask yourself: How does your organization define and manage the language your data speaks?
This article was written by Marnix Fetter, data scientist, and Eva Scherders, data & AI engineer at Rewire.