"We need an ontology" is often a bundled answer to four quite different problems. This deck separates them and asks which tool actually delivers the claimed benefit for each one.
The Recurring Promise
Ontologies are regularly proposed as the solution to enterprise knowledge and AI problems. The claimed benefits appear in the literature and in vendor pitches:
Machine-interpretable meaning
Formal semantics that software can reason over without human mediation.
Shared concepts
A common vocabulary that all systems and participants agree on.
Cross-system interoperability
Ability to exchange and interpret data across organisational boundaries.
This deck examines each claim against specific enterprise problems and asks which tool actually delivers it.
A Note on Terminology
Before examining the four problems, it is worth naming a pattern that runs through much of the literature.
OWL, RDF, and SPARQL are valuable tools. The Semantic Web research tradition has produced a well-developed stack with real results in the domains it was designed for. That work deserves respect on its own terms.
In practice, "ontological" has come to mean "uses OWL/RDF/SPARQL". The notation has become a proxy for the concept. A project is described as ontology-based because it uses these tools, not because it makes a particular kind of claim about domain reality. This conflation appears in research literature, vendor positioning, and project proposals alike. (Feilmayr & Wöß, 2016)
This deck uses ontology in the conceptual sense: a claim about what a domain actually is - independent of how that claim is expressed. An ORM model, an OWL file, or a DEMO analysis can each be ontological in this sense. The question worth asking is always what the approach claims about reality, not which notation it uses.
Four Distinct Problems
"We need an ontology" often bundles four different problems. Each has a different solution.
Problem 1: Capturing Domain Knowledge
What ORM Already Provides
The properties most often claimed as ontology advantages are already present in a well-built ORM model.
Formal, constraint-rich models in predicate logic
Natural-language verbalizations domain experts can validate
Implementation-independent; describes reality, not storage
N-ary relationships, ring constraints, derivation rules
Interactive constraint checking during model construction
Precise enough to formalise DEMO enterprise ontology results
Example verbalization: Employee works for Company. Employee uses Machine on Project. Each Employee works for exactly one Company.
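The verbalizations above are not just documentation; they are directly executable logic. A minimal sketch, assuming hypothetical fact populations (the names and fact instances are illustrative, not from a real model):

```python
# Sketch: checking an ORM-style uniqueness constraint over fact populations.
# The fact types mirror the verbalizations above; all data is hypothetical.
from collections import Counter

# Fact type: Employee works for Company (binary)
works_for = [
    ("alice", "acme"),
    ("bob", "acme"),
    ("carol", "globex"),
]

# Fact type: Employee uses Machine on Project (ternary, i.e. n-ary)
uses_on = [
    ("alice", "lathe-1", "apollo"),
    ("bob", "press-2", "apollo"),
]

def violates_exactly_one_company(facts):
    """Return employees breaking 'Each Employee works for exactly one Company'."""
    counts = Counter(emp for emp, _ in facts)
    return [emp for emp, n in counts.items() if n != 1]

print(violates_exactly_one_company(works_for))   # no violations in this population
print(violates_exactly_one_company(works_for + [("alice", "globex")]))
```

A real ORM tool checks such constraints interactively during model construction; the sketch only shows that the verbalized constraint translates directly into checkable logic.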
Problem 2: AI Access to Authoritative Facts
Agents should query structured facts at inference time rather than relying on embedded or stale knowledge. Authoritative here means governed and schema-controlled.
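The pattern can be sketched with a hypothetical in-memory store standing in for the governed GraphQL or REST service; the schema, field names, and provenance attributes are all assumptions, not a real API:

```python
# Sketch: an agent tool that resolves facts at inference time from a governed
# store, instead of relying on knowledge embedded in model weights.
# The store, keys, and provenance fields are hypothetical stand-ins for a
# real GraphQL/REST service.

FACT_STORE = {
    ("employee", "alice", "worksFor"): {
        "value": "acme",
        "source": "hr-system",   # provenance: which system asserted this
        "asOf": "2025-01-15",    # freshness: when it was last confirmed
    },
}

def resolve(entity_type, entity_id, attribute):
    """Return the governed fact with its provenance, or None if not on record."""
    return FACT_STORE.get((entity_type, entity_id, attribute))

fact = resolve("employee", "alice", "worksFor")
print(fact["value"], fact["source"])
```

The point is the shape of the contract: the agent receives the fact, its source, and its freshness, and an unrecorded fact comes back as an explicit absence rather than a hallucinated value.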
Problem 3: Cross-System Concept Mapping
Why universal ontologies tend to fail here, and how Wittgenstein's notion of family resemblance explains the failure.
The "Customer" Problem
Sales: Any prospect or account in CRM
Finance: Any entity with a billing record
Legal: Any party to a signed contract
Support: Any user who has opened a ticket
What a universal ontology tries
Force a single definition. Either choose one department's version or create an abstract supertype that satisfies no one and adds maintenance overhead.
What family resemblance suggests: These definitions overlap enough that we recognise them as the same concept in context, without forcing them into a single mould.
"If you look at [games] you will not see something that is common to all, but similarities, relationships, and a whole series of them at that." — Wittgenstein, Philosophical Investigations §66
AI-Assisted Boundary Mapping
LLMs often connect concepts through usage and context rather than by checking necessary and sufficient conditions explicitly.
This workflow draws on validated components: LLM semantic alignment, soft alignment as a strategy (Li et al. 2025), and human-in-the-loop review. End-to-end evaluation of the specific combination is ongoing.
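The workflow can be sketched as similarity scoring plus a review queue. Here toy three-dimensional vectors stand in for real LLM embeddings, and the thresholds are illustrative assumptions, not validated values from the cited work:

```python
# Sketch: similarity-based boundary mapping with human-in-the-loop review.
# Toy vectors stand in for LLM embeddings; thresholds are illustrative only.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical embeddings for each department's "Customer" concept
concepts = {
    "sales.Prospect":        [0.9, 0.3, 0.1],
    "finance.BillableParty": [0.8, 0.4, 0.3],
    "legal.ContractParty":   [0.7, 0.5, 0.4],
}

def propose_mappings(concepts, auto=0.95, review=0.80):
    """Pair concepts; auto-accept above `auto`, queue for expert review above `review`."""
    names = sorted(concepts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            s = cosine(concepts[a], concepts[b])
            if s >= auto:
                yield (a, b, "accept", round(s, 2))
            elif s >= review:
                yield (a, b, "human-review", round(s, 2))

for row in propose_mappings(concepts):
    print(row)
```

No mapping is forced: pairs below the review threshold simply remain unmapped, which is the family-resemblance stance expressed as a default.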
Problem 4: Decentralised Open-World Integration
This is where the Semantic Web stack earns its keep.
Characteristics of this problem
Contributors are distributed; no single authority controls the schema
Facts arrive incrementally from independent sources
The open world assumption is the correct default: absence of a fact means unknown, not false
Stable, globally resolvable identifiers (URIs) are essential for shared reference
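The difference the open world assumption makes fits in a few lines. The triples and URIs here are hypothetical stand-ins for a real RDF store:

```python
# Sketch: closed-world vs open-world readings of the same fact base.
# The triples and URIs are illustrative; a real system would use an RDF store.

FACTS = {
    ("http://example.org/gene/BRCA1", "locatedOn", "http://example.org/chr/17"),
}

def closed_world(triple):
    """Closed world: anything not asserted is false."""
    return triple in FACTS

def open_world(triple):
    """Open world: absence of an assertion means unknown, not false."""
    return True if triple in FACTS else None   # None stands for 'unknown'

q = ("http://example.org/gene/BRCA1", "locatedOn", "http://example.org/chr/13")
print(closed_world(q))  # False: the missing fact is treated as disproved
print(open_world(q))    # None: no contributor has asserted it yet
```

In a decentralised setting the open-world reading is correct: another contributor may assert the fact tomorrow. In a single-authority enterprise database, the closed-world reading is usually what you want.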
Where this actually occurs
Biology databases (genes, proteins, organisms)
Academic linked data across institutions
Scientific nomenclature (species, compounds)
Government open data initiatives
Most enterprise problems do not look like this. Importing the open world assumption into a context that does not need it adds complexity without adding value. The Semantic Web stack is the right tool for the right problem, not a universal upgrade.
Why Universal Ontologies Struggle in Enterprise
The hidden assumption: Top-down ontology engineering assumes that enterprise concepts have clean necessary and sufficient conditions, and that there is a correct definition waiting to be found and agreed upon. Wittgenstein's insight is that this is often false for real business concepts.
Customer
Sales, finance, legal, and support each have a working definition that overlaps with but does not match the others.
Employee
Full-time staff, contractor, secondee, and intern share family resemblance but not a common essential nature.
Product
Physical goods, digital licences, subscriptions, and professional services are all called products in most catalogues.
Note: This argument applies specifically to business concepts. Formally closed domains such as chemical nomenclature, mathematical objects, and physical constants can support top-down ontology engineering precisely because their concepts admit genuine necessary and sufficient conditions. Those domains also tend to be the scientific cases where Problem 4 tooling is appropriate.
The Architecture That Follows
Each layer doing the work it is actually suited for.
ORM - Conceptual Core
Formal, constraint-rich, verbalizable models. Implementation-independent. Can be exported to JSON-LD or other encodings when needed.
GraphQL / REST - Typed API Layer
Interfaces for AI agents. Fresh authoritative facts at inference time. Contracts, permissions, provenance. No open-world assumptions imported.
AI Similarity - At the Boundary
LLM-assisted concept mapping where systems meet. Proposals reviewed by experts. Accepted mappings codified and monitored.
Semantic Web Stack - Where It Fits
Decentralised multi-contributor environments where the open world assumption is appropriate. Biology, academic linked data, scientific nomenclature.
Choosing Your Approach
Your problem
Recommended approach
Capturing facts, rules, and constraints of a domain with precision
Start with ORM. More expressive than most practitioners realise. Grounded in natural language. No unnecessary assumptions.
AI agents need access to current, authoritative facts at inference time
ORM + GraphQL or REST. Agents query structured facts directly. Avoids embedded or stale knowledge.
Mapping concepts across systems with different labels but overlapping meanings
ORM internally, AI similarity at the boundary. Expert review to codify accepted results. Do not force a universal schema.
Integrating knowledge from independent contributors with no central schema authority
Semantic Web stack. RDF, OWL, and the open world assumption were designed for this problem. Specific and important, but not universal.