
Knowledge Interoperability in Practice

Choosing the Right Tool for the Problem
By G. Sawatzky, knowledge-foundation.ai  |  2025
"We need an ontology" is often a bundled answer to four quite different problems. This deck separates them and asks which tool actually delivers the claimed benefit for each one.

The Recurring Promise

Ontologies are regularly proposed as the solution to enterprise knowledge and AI problems. The claimed benefits appear in the literature and in vendor pitches:

Machine-interpretable meaning

Formal semantics that software can reason over without human mediation.

Shared concepts

A common vocabulary that all systems and participants agree on.

Cross-system interoperability

Ability to exchange and interpret data across organisational boundaries.

This deck examines each claim against specific enterprise problems and asks which tool actually delivers it.

A Note on Terminology

Before examining the four problems, it is worth naming a pattern that runs through much of the literature.

This deck uses ontology in the conceptual sense: a claim about what a domain actually is - independent of how that claim is expressed. An ORM model, an OWL file, or a DEMO analysis can each be ontological in this sense. The question worth asking is always what the approach claims about reality, not which notation it uses.

Four Distinct Problems

"We need an ontology" often bundles four different problems. Each has a different solution.

1. Capturing domain knowledge: facts, rules, and constraints expressed with precision and shared understanding. → Typical fit: ORM
2. AI access to authoritative facts: agents querying current, reliable data at inference time. → Typical fit: ORM + GraphQL / REST
3. Mapping concepts across systems: aligning terms with different labels but overlapping meanings. → Typical fit: AI-assisted similarity mapping
4. Decentralised open-world integration: multi-contributor knowledge from independent sources worldwide. → Typical fit: Semantic Web stack (OWL/RDF)

Problem 1: Capturing Domain Knowledge

What ORM Already Provides

The properties most often claimed as ontology advantages are already present in a well-built ORM model.

  • Formal, constraint-rich models in predicate logic
  • Natural-language verbalizations domain experts can validate
  • Implementation-independent; describes reality, not storage
  • N-ary relationships, ring constraints, derivation rules
  • Interactive constraint checking during model construction
  • Precise enough to formalise DEMO enterprise ontology results

Example verbalizations:
Employee works for Company.
Employee uses Machine on Project.
Each Employee works for exactly one Company.
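Verbalizations like the ones above can be generated mechanically from fact-type definitions. A minimal sketch in Python; the class and method names are illustrative, not taken from any ORM tool:

```python
# Minimal sketch: representing ORM fact types and a uniqueness constraint,
# and generating the natural-language verbalizations shown above.
from dataclasses import dataclass

@dataclass(frozen=True)
class FactType:
    """An n-ary fact type, e.g. ('Employee', 'works for', 'Company')."""
    roles: tuple   # object types playing the roles, in reading order
    reading: str   # predicate reading with '...' placeholders for roles

    def verbalize(self) -> str:
        text = self.reading
        for role in self.roles:
            text = text.replace("...", role, 1)
        return text + "."

@dataclass(frozen=True)
class UniquenessConstraint:
    """Simple uniqueness over the first role of a binary fact type."""
    fact: FactType

    def verbalize(self) -> str:
        subject, obj = self.fact.roles
        verb = self.fact.reading.replace("...", "", 2).strip()
        return f"Each {subject} {verb} exactly one {obj}."

works_for = FactType(("Employee", "Company"), "... works for ...")
uses_on = FactType(("Employee", "Machine", "Project"), "... uses ... on ...")

print(works_for.verbalize())                        # Employee works for Company.
print(uses_on.verbalize())                          # Employee uses Machine on Project.
print(UniquenessConstraint(works_for).verbalize())  # Each Employee works for exactly one Company.
```

The point of the sketch: the verbalization is derived from the same formal structure the constraint checker uses, so domain experts validate exactly what the model asserts.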

Problem 2: AI Access to Authoritative Facts

Agents should query structured facts at inference time rather than relying on embedded or stale knowledge. Authoritative here means governed and schema-controlled.

ORM conceptual layer (verbalizations; exportable to JSON-LD, OWL, …) → GraphQL / REST typed API layer (contracts + permissions) → LLM agent queries fresh facts at inference time → grounded answer with provenance.

Note on OBDA: Sequeda's OBDA pattern (OWL + SPARQL over relational data) is reasonable when bridging enterprise data into a larger decentralised integration, i.e. a Problem 4 context. For bounded enterprise use alone, GraphQL / REST over a well-modelled schema avoids importing open-world assumptions the data does not have.
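The grounding pattern can be sketched in a few lines. The in-memory dictionary stands in for a real GraphQL / REST service, and all identifiers and field names are invented for illustration:

```python
# Sketch: an agent grounds its answer by querying a typed API at inference
# time instead of relying on embedded or stale knowledge. The FACTS dict
# stands in for a governed, schema-controlled service.
FACTS = {
    ("employee", "E042", "worksFor"): {
        "value": "Acme GmbH", "asOf": "2025-06-01", "source": "hr-system",
    },
}

def query_fact(entity_type: str, entity_id: str, predicate: str):
    """Typed lookup, mimicking a field resolver that returns provenance."""
    return FACTS.get((entity_type, entity_id, predicate))

def grounded_answer(employee_id: str) -> str:
    record = query_fact("employee", employee_id, "worksFor")
    if record is None:
        # The agent must say it does not know rather than invent a fact.
        return "No authoritative record found."
    return (f"Employee {employee_id} works for {record['value']} "
            f"(source: {record['source']}, as of {record['asOf']}).")

print(grounded_answer("E042"))
print(grounded_answer("E999"))  # No authoritative record found.
```

The answer carries the provenance fields alongside the value, so downstream consumers can audit what the agent relied on.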

Problem 3: Cross-System Concept Mapping

Why universal ontologies tend to fail here, and what Wittgenstein's notion of family resemblance helps explain.

The "Customer" Problem

  • Sales: Any prospect or account in CRM
  • Finance: Any entity with a billing record
  • Legal: Any party to a signed contract
  • Support: Any user who has opened a ticket

What a universal ontology tries

Force a single definition. Either choose one department's version or create an abstract supertype that satisfies no one and adds maintenance overhead.

What family resemblance suggests: These definitions overlap enough that we recognise them as the same concept in context, without forcing them into a single mould.
"If you look at [games] you will not see something that is common to all, but similarities, relationships, and a whole series of them at that." — Wittgenstein, Philosophical Investigations §66

AI-Assisted Boundary Mapping

LLMs often connect concepts through usage and context rather than by checking necessary and sufficient conditions explicitly.

1. Model precisely with ORM: constraints, verbalizations, fact types.
2. Export JSON + verbalizations: publish conceptual elements.
3. AI proposes mappings: context-driven similarity scoring.
4. Expert review + codify: accepted mappings are auditable.

Properties: precision inside (ORM) · flexibility outside (AI similarity) · proposals explainable and subject to review · accepted mappings codified and monitored.

This workflow draws on validated components: LLM semantic alignment, soft alignment as a strategy (Li et al. 2025), and human-in-the-loop review. End-to-end evaluation of the specific combination is ongoing.
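Steps 2 to 4 can be sketched as follows. Here `difflib`'s string similarity stands in for an LLM similarity model, and the element names are invented; in practice the inputs would be exported ORM verbalizations and the scoring would be semantic, not lexical:

```python
# Sketch of the mapping workflow: score candidate alignments between two
# systems' vocabularies and queue proposals for expert review. Nothing is
# auto-accepted.
from difflib import SequenceMatcher

crm_terms     = ["Customer Account", "Sales Opportunity", "Contact Person"]
billing_terms = ["Billing Account", "Invoice", "Contact"]

def similarity(a: str, b: str) -> float:
    """Lexical stand-in for an LLM-based semantic similarity score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def propose_mappings(left, right, threshold=0.6):
    proposals = []
    for a in left:
        for b in right:
            score = similarity(a, b)
            if score >= threshold:
                proposals.append({"from": a, "to": b,
                                  "score": round(score, 2),
                                  "status": "pending_review"})
    return sorted(proposals, key=lambda p: -p["score"])

for proposal in propose_mappings(crm_terms, billing_terms):
    print(proposal)
```

The essential property is the `pending_review` status: the scorer only proposes; the expert decision is what gets codified and monitored.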

Problem 4: Decentralised Open-World Integration

This is where the Semantic Web stack earns its keep.

Characteristics of this problem

  • Contributors are distributed; no single authority controls the schema
  • Facts arrive incrementally from independent sources
  • The open world assumption is the correct default: absence of a fact means unknown, not false
  • Stable, globally resolvable identifiers (URIs) are essential for shared reference
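The difference between the two defaults can be stated in a few lines; the triples and predicate names below are illustrative:

```python
# Sketch: the same absent fact read under closed-world and open-world
# assumptions.
known_facts = {("gene_X", "interacts_with", "protein_Y")}

def closed_world(fact) -> bool:
    """Bounded enterprise database: what is not recorded is false."""
    return fact in known_facts

def open_world(fact):
    """Decentralised setting: what is not recorded is merely unknown."""
    return True if fact in known_facts else None  # None = unknown

absent = ("gene_X", "interacts_with", "protein_Z")
print(closed_world(absent))  # False: treated as definitely not the case
print(open_world(absent))    # None: another contributor may yet assert it
```

Choosing between the two is a modelling decision about the data's governance, not a notation preference.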

Where this actually occurs

  • Biology databases (genes, proteins, organisms)
  • Academic linked data across institutions
  • Scientific nomenclature (species, compounds)
  • Government open data initiatives

Most enterprise problems do not look like this. Importing the open world assumption into a context that does not need it adds complexity without adding value. The Semantic Web stack is the right tool for the right problem, not a universal upgrade.

Why Universal Ontologies Struggle in Enterprise

The hidden assumption: Top-down ontology engineering assumes that enterprise concepts have clean necessary and sufficient conditions, and that there is a correct definition waiting to be found and agreed upon. Wittgenstein's insight is that this is often false for real business concepts.
Customer

Sales, finance, legal, and support each have a working definition that overlaps with but does not match the others.

Employee

Full-time staff, contractor, secondee, and intern share family resemblance but not a common essential nature.

Product

Physical goods, digital licences, subscriptions, and professional services are all called products in most catalogues.

Note: This argument applies specifically to business concepts. Formally closed domains such as chemical nomenclature, mathematical objects, and physical constants can support top-down ontology engineering precisely because their concepts do admit necessary and sufficient conditions. Those domains also tend to be the scientific cases where Problem 4 tooling is appropriate.

The Architecture That Follows

Each layer doing the work it is actually suited for.

ORM - Conceptual Core

Formal, constraint-rich, verbalizable models. Implementation-independent. Can be exported to JSON-LD or other encodings when needed.

GraphQL / REST - Typed API Layer

Interfaces for AI agents. Fresh authoritative facts at inference time. Contracts, permissions, provenance. No open-world assumptions imported.

AI Similarity - at the Boundary

LLM-assisted concept mapping where systems meet. Proposals reviewed by experts. Accepted mappings codified and monitored.

Semantic Web Stack - where it fits

Decentralised multi-contributor environments where the open world assumption is appropriate. Biology, academic linked data, scientific nomenclature.

Choosing Your Approach

Your problem → Recommended approach

  • Capturing facts, rules, and constraints of a domain with precision → Start with ORM. More expressive than most practitioners realise. Grounded in natural language. No unnecessary assumptions.
  • AI agents need access to current, authoritative facts at inference time → ORM + GraphQL or REST. Agents query structured facts directly. Avoids embedded or stale knowledge.
  • Mapping concepts across systems with different labels but overlapping meanings → ORM internally, AI similarity at the boundary. Expert review to codify accepted results. Do not force a universal schema.
  • Integrating knowledge from independent contributors with no central schema authority → Semantic Web stack. RDF, OWL, and the open world assumption were designed for this problem. Specific and important, but not universal.

References

  • Wittgenstein, L. (1953). Philosophical Investigations.
  • Halpin, T.A. & Morgan, T. (2008). Information Modeling and Relational Databases.
  • Dietz, J.L.G. (2006). Enterprise Ontology. Springer.
  • Feilmayr & Wöß (2016). Data & Knowledge Engineering. DOI: 10.1016/j.datak.2015.11.003
  • Date, C.J. (2015). Logic and Relational Theory.
  • Guarino, N. (1998). Formal Ontology in Information Systems. FOIS.
  • Gruber, T.R. (1993). A Translation Approach to Portable Ontology Specifications.
  • Li, Z. et al. (2025). LLM-Assisted Semantic Alignment… IEEE ISSE 2025. arXiv:2508.16181
  • Zhang, Q. et al. (2024). LLM-Align. arXiv:2412.04690
  • Sequeda, J.F. & Miranker, D.P. (2013). Ultrawrap. J. Web Semantics, 22:19–36.