Addressing practitioner objections on closed semantics, LLMs, and the philosophical sources behind the framework.
These notes address questions that come up when practitioners push back on the frameworks in the two companion articles. They are written for readers who want to go deeper on specific distinctions, not as required reading before the main articles. If you are encountering objections in your own organization about closed semantics, what LLMs actually do, or the philosophical sources behind the framework, this is where those objections get addressed directly.
The articles argue against closed semantics. But don't some systems have legitimately closed semantics? When does closure actually apply?
The articles argue against domain semantic closure, not against formal semantic closure. These are different things, and conflating them is the most common objection the framework receives.
Formal semantic closure is real and useful. Within a defined formal system, operators and constructs have precise, deterministic meanings. SQL JOIN, lambda calculus, propositional logic, type systems, and communication protocols all have closed semantics within their formal boundaries. This is necessary for computation.
Domain semantic closure is a different demand. When someone asks "what does Customer mean?" they are not asking about an operator in a formal system. They are asking how a term in their model relates to phenomena in their domain. That interface between formal system and world cannot be closed by the formal system itself.
The confusion is assuming that using a formally closed system like OWL or RDF gives you closed domain semantics. It does not. The formal system gives you closed operator semantics. It does not give you closed meaning for the terms you put into it.
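The distinction can be made concrete with a small sketch (table and column names here are illustrative, not from the articles). The JOIN below has closed formal semantics: every conforming engine returns the same rows. But the same schema leaves the domain question "what counts as a Customer?" entirely open:

```python
import sqlite3

# A formally closed operation: the JOIN's result is fully determined by the
# relational algebra, regardless of what the terms "mean" in the domain.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, has_paid INTEGER);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customer VALUES (1, 'Acme', 1), (2, 'Globex', 0);
    INSERT INTO orders   VALUES (10, 1, 99.0);
""")

# Closed operator semantics: every engine (and every reader) gets the same rows.
rows = con.execute(
    "SELECT c.name, o.total FROM customer c JOIN orders o ON o.customer_id = c.id"
).fetchall()
print(rows)  # [('Acme', 99.0)]

# Open domain semantics: the system cannot tell you whether 'Globex'
# (signed up, never paid) *is* a Customer. Two teams can answer that
# question differently while running identical, formally correct SQL.
marketing_view = con.execute("SELECT name FROM customer").fetchall()
finance_view = con.execute("SELECT name FROM customer WHERE has_paid = 1").fetchall()
```

Nothing in the formal system arbitrates between the marketing and finance readings; that arbitration happens at the interface between the model and the domain.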
Formal semantic closure legitimately applies to:
- operators in query languages (SQL JOIN, relational algebra)
- formal calculi (lambda calculus, propositional logic)
- type systems and their checking rules
- communication protocols and their wire formats
These are all cases where meaning is defined within a formal system, not at the interface between the formal system and the world being modeled. The articles should not be read as "closed semantics is bad." They should be read as "closed semantics at the domain level is not achievable, and pretending otherwise causes problems."
But our operational systems work fine. Everyone agrees what the terms mean. Haven't we achieved closure?
What you have achieved is local convergence, not absolute closure. This is real and valuable.
Within a bounded context (a product, an organization, an operational system) meaning does converge over time. Systems get debugged. Edge cases get handled. Disputes get resolved. Shared understanding develops. After enough iteration, certain parts become stable enough that they function as if closed.
This convergence is:
- local: it holds within the bounded context where the iteration happened, not across contexts
- emergent: it is produced by use, debugging, and dispute resolution, not guaranteed up front by the formalism
- provisional: it persists only as long as the practices that sustain it persist
The articles do not deny this. If meaning is validated through use (Wittgenstein), and communities converge toward stable meaning over time (Peirce), then sufficient iteration within a bounded context produces something like local closure.
What the articles deny is that you can achieve closure prior to use, or that formal systems guarantee closure at the domain level. Closure emerging from sustained practice is exactly what you would expect. It is also why governance matters: local convergence is not a permanent achievement but a temporarily stable state that must be maintained.
What about design patterns like Type 2 Slowly Changing Dimensions? Do they have semantics?
Yes, and this example usefully bridges both senses.
At the formal level, a Type 2 SCD has clear semantics: surrogate keys, effective/expiration dates, a current flag, insert-only behavior for changes. You can specify this precisely.
At the conceptual level, a Type 2 SCD encodes a commitment about how you represent entity identity and change over time. In ORM terms, you might express this as a fact type like "Employee had Salary during Period" where Period is a value type with start and end dates. The Type 2 SCD is one implementation pattern for that conceptual distinction.
So when people talk about "the semantics of a Type 2 SCD," they are usually referring to both: what the pattern does technically (formal), and what conceptual commitment it enables (domain). The formal semantics of the pattern is closed. The domain semantics of how you apply it remains open and context-dependent.
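The formal side of the pattern can be sketched in a few lines (a minimal illustration using sqlite3; the schema and names are illustrative, not a production design). Each change closes out the current version row and inserts a new one, so the prior value is never overwritten:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dim_employee (
        surrogate_key   INTEGER PRIMARY KEY AUTOINCREMENT,  -- identity of the *version*
        employee_id     INTEGER,                            -- identity of the entity
        salary          REAL,
        effective_date  TEXT,
        expiration_date TEXT,      -- NULL while the row is current
        is_current      INTEGER
    )
""")

def apply_salary_change(con, employee_id, new_salary, as_of):
    """Type 2 handling of change: expire the current version, insert a new one.
    The salary value itself is never updated in place."""
    con.execute(
        "UPDATE dim_employee SET expiration_date = ?, is_current = 0 "
        "WHERE employee_id = ? AND is_current = 1",
        (as_of, employee_id),
    )
    con.execute(
        "INSERT INTO dim_employee "
        "(employee_id, salary, effective_date, expiration_date, is_current) "
        "VALUES (?, ?, ?, NULL, 1)",
        (employee_id, new_salary, as_of),
    )

apply_salary_change(con, 42, 50_000, "2023-01-01")
apply_salary_change(con, 42, 55_000, "2024-01-01")

# Two version rows now encode "Employee had Salary during Period":
# the expired 50000 version and the current 55000 version.
history = con.execute(
    "SELECT salary, effective_date, expiration_date, is_current "
    "FROM dim_employee WHERE employee_id = 42 ORDER BY surrogate_key"
).fetchall()
```

The closed part is everything in the code: key structure, date handling, flag maintenance. The open part is everything the code is silent about: whether a salary change and a title change are "the same kind" of change, or when two employee records denote the same person.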
In relational theory, we say a relation has an intension (the schema or heading) and an extension (the population). Doesn't that mean the relational model has semantics built in?
The terminology comes from logic and philosophy: intension is the meaning or definition of a concept (what properties define membership); extension is the set of things that fall under the concept.
But the RDBMS does not "have" the intension in the sense of understanding it. The database engine stores tuples and enforces constraints. It has no interpretant in Peirce's sense.
The intension is a formal specification that enables human interpreters to understand what the relation means. It is a sign that points to meaning, not a container that holds meaning. The meaning lives in:
- the designers who chose the heading and what its predicates are meant to assert
- the practitioners who populate, query, and maintain the relation
- the ongoing practice in which the data is interpreted and acted upon
The RDBMS preserves the formal structure faithfully. That is valuable. But preserving structure is not the same as possessing meaning. This reinforces the Article 1 position: formal artifacts (relation schemas, ORM diagrams, OWL ontologies) are tools that support shared meaning. They do not contain meaning independently of interpretation.
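The intension/extension distinction, and the limit of what an engine can check, can be sketched in plain Python structures (names and values here are illustrative):

```python
# Intension: the heading, i.e. the properties that define membership.
heading = {"emp_id": int, "name": str, "salary": float}

# Extension: the set of tuples currently falling under the concept.
extension = {
    (1, "Ada", 120_000.0),
    (2, "Grace", 130_000.0),
}

def conforms(tup, heading):
    """What an engine *can* enforce: arity and types match the heading."""
    return len(tup) == len(heading) and all(
        isinstance(v, t) for v, t in zip(tup, heading.values())
    )

assert all(conforms(t, heading) for t in extension)

# What no engine can enforce: whether "salary" means base pay, total
# compensation, or FTE-normalized pay. That interpretant lives with the
# humans who read, populate, and maintain the schema.
```

The `conforms` check is the whole of what "having" the intension amounts to for the machine: structure preservation, not understanding.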
Don't LLMs demonstrate that machines can understand semantics? They seem to grasp meaning in ways previous systems could not.
LLMs receive meaning from humans at two levels.
At training time, LLMs learn from vast amounts of human-generated content where humans provided the meaning, context, and intent. In Peirce's terms, the training corpus is billions of human interpretants. The LLM learns statistical patterns across those interpretants.
At runtime, designers and developers provide system prompts, instructions, examples, and constraints that frame what the LLM should do. This context shapes the LLM's behavior within a semantic frame that humans constructed. The LLM does not decide what it is for or what it should attend to. Humans specify that.
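The runtime frame described above can be made concrete with a hedged sketch (every name, prompt, and label below is hypothetical, invented for illustration). The point is that each element of the frame, including what "Customer" means, is specified by people before the model ever sees the input:

```python
# Hypothetical frame a design team constructs around an LLM call.
# Nothing here originates with the model: purpose, vocabulary,
# examples, and constraints are all human-specified.
frame = {
    "system_prompt": (
        "You are a support-ticket classifier. "
        "'Customer' means an account with an active subscription."
    ),
    "few_shot_examples": [
        {"ticket": "Cannot log in since renewal", "label": "account-access"},
    ],
    "allowed_labels": ["account-access", "billing", "other"],
}

def build_messages(frame, user_text):
    """Assemble the human-specified frame plus the input into a chat payload."""
    messages = [{"role": "system", "content": frame["system_prompt"]}]
    for ex in frame["few_shot_examples"]:
        messages.append({"role": "user", "content": ex["ticket"]})
        messages.append({"role": "assistant", "content": ex["label"]})
    messages.append({"role": "user", "content": user_text})
    return messages

payload = build_messages(frame, "Charged twice this month")
```

Whatever the model returns, it returns from inside this frame; the semantic commitments were made before inference began.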
The outputs look like understanding because they reflect patterns of human understanding, shaped by human-provided context. But the LLM itself:
- did not originate the meaning it reproduces
- has no interpretant of its own in Peirce's sense
- does not decide what it is for or what it should attend to
The LLM is a sophisticated mirror of human semantic behavior. But mirroring is not the same as possessing. The meaning in an LLM's output originates from humans who created the training data, not from the LLM itself.
Could AI achieve true understanding in the future?
This is a harder question, and there are several philosophical positions:
1. Understanding requires something syntax alone cannot supply, so no amount of scaling gets there (the Searle, Chinese Room line of argument).
2. Understanding just is the right functional organization, so a sufficiently capable system would genuinely understand (the functionalist line).
3. Understanding requires embodiment and grounded interaction with the world, which current LLMs lack (the embodied-cognition line).
4. The question does not need to be resolved for practical purposes, because responsibility, authority, and intent must be governed the same way either way (the pragmatic line).
The prototype at knowledge-foundation.ai takes position 4 seriously: it does not depend on resolving whether LLMs understand, because the governance layer works the same way regardless.
For the purposes of the two framework articles, this is the most useful position. Even if AI achieved genuine understanding in the future, you would still want:
- clear human responsibility for decisions and their consequences
- explicit authority over which terms and definitions are binding
- a way to specify, and later audit, the intent the system is acting under
These are organizational and social requirements, not cognitive ones. The Intent-Driven AI Delegates framework is robust to the metaphysical question either way. Whether or not AI "truly understands," the questions of who is responsible, who has authority, and how intent is specified remain.