Anticipating practitioner questions on closed semantics, LLMs, and the philosophical sources behind the framework.
A recent blog post argued that AI agents need to operate under formal semantic closure. I disagree, and the companion articles develop why. These notes address the objections my position is likely to raise: on closed semantics, on what LLMs actually do, and on the philosophical sources behind the framework. They are for readers who want the full argument rather than just the conclusions.
The articles argue against closed semantics. But don't some systems have legitimately closed semantics? When does closure actually apply?
The articles argue against domain semantic closure, not against formal semantic closure. These are different things, and conflating them is the objection most likely to arise in practice.
Formal semantic closure is real and useful. Within a defined formal system, operators and constructs have precise, deterministic meanings. SQL JOIN, lambda calculus, propositional logic, type systems, and communication protocols all have closed semantics within their formal boundaries. This is necessary for computation.
Domain semantic closure is a different demand. When someone asks "what does Customer mean?" they are not asking about an operator in a formal system. They are asking how a term in their model relates to phenomena in their domain. That interface between formal system and world cannot be closed by the formal system itself.
The confusion is assuming that using a formally closed system like OWL or RDF gives you closed domain semantics. It does not. The formal system gives you closed operator semantics. It does not give you closed meaning for the terms you put into it.
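The distinction can be made concrete with a small sketch using Python's built-in sqlite3 module. The table and column names here are illustrative assumptions, not taken from any particular system:

```python
import sqlite3

# Two toy relations. What 'customer' means in the domain is not
# settled by anything in this script.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customer VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1);
""")

# Formal closure: the semantics of inner JOIN are fixed by the
# relational model. Every conforming engine must return exactly the
# tuples where the join condition holds; nothing in the domain can
# change what JOIN means.
rows = conn.execute("""
    SELECT c.name, o.id
    FROM customer c JOIN orders o ON o.customer_id = c.id
""").fetchall()
print(rows)  # [('Acme', 10)] -- determined entirely by the formal rule

# Domain openness: nothing in the schema settles whether 'customer'
# denotes a billing account, a legal entity, or a person. That question
# is answered by interpreters, not by the formal system.
```

The engine gives you closed operator semantics for JOIN; it gives you nothing at all about what the terms you joined refer to.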
Formal semantic closure legitimately applies to constructs within defined formal boundaries: query operators such as SQL JOIN, formal logics such as lambda calculus and propositional logic, type systems, and communication protocols.
The argument in the companion articles is not that closed semantics is bad; it is that closed semantics at the domain level is not fully achievable. Formal closure works because operators like SQL JOIN have precise, deterministic implementations that never change. Applying that standard to all domain concepts would require enterprises to define their own operators at the same level of precision. Some aspects of organizational behavior can be formalized; Dietz's action rules do exactly that, but only for the essential and stable aspects of an organization, and even those include explicit checkpoints where human judgment is required. Extending that formalization beyond its natural boundaries is where governance efforts produce definitions nobody uses and systems that don't mirror how organizations actually work in practice.
One important instance of legitimate formal closure deserves explicit mention: the Closed World Assumption in database query evaluation. When a relation is defined by a predicate and a tuple is absent from that relation, the CWA licenses the conclusion that the corresponding proposition is false: not unknown, not unrecorded, but false within the scope of the model. This is a formal inference rule, not a philosophical claim about the completeness of the world. The "world" being closed is precisely the world the predicate describes, nothing more. This is why database constraint enforcement works: mandatory roles, uniqueness, and exclusion constraints all depend on being able to conclude something from absence. The companion piece on the Closed World Assumption addresses this directly.
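Both halves of that point, inference from absence and constraint enforcement that depends on it, can be shown in a few lines of sqlite3. The enrollment schema is an illustrative assumption:

```python
import sqlite3

# A minimal relation defined by the predicate
# "student s is enrolled in course c".
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE enrollment (
        student TEXT NOT NULL,
        course  TEXT NOT NULL,
        UNIQUE (student, course)
    );
    INSERT INTO enrollment VALUES ('alice', 'logic');
""")

# CWA: absence licenses a negative conclusion within the scope of the
# predicate. "bob is enrolled in logic" is false in the model, not unknown.
enrolled = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM enrollment "
    "WHERE student = 'bob' AND course = 'logic')"
).fetchone()[0]
print(bool(enrolled))  # False -- concluded from absence, not from a stored fact

# Constraint enforcement depends on the same move: UNIQUE can only
# reject a duplicate because the prior absence of a matching row counted
# as definitive rather than merely unrecorded.
try:
    conn.execute("INSERT INTO enrollment VALUES ('alice', 'logic')")
except sqlite3.IntegrityError:
    print("duplicate rejected")
```

Note that the negative conclusion says nothing about the world; bob may well be sitting in the lecture hall. It is false only within the world the predicate describes.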
But our operational systems work fine. Everyone agrees what the terms mean. Haven't we achieved closure?
What you have achieved is local convergence, not absolute closure. This is real and valuable.
Within a bounded context (a product, an organization, an operational system) meaning does converge over time. Systems get debugged. Edge cases get handled. Disputes get resolved. Shared understanding develops. After enough iteration, certain parts become stable enough that they function as if closed.
What the articles deny is that you can achieve closure prior to use, or that formal systems guarantee closure at the domain level. Closure emerging from sustained practice is exactly what you would expect. It is also why governance matters: local convergence is not a permanent achievement but a temporarily stable state that must be maintained.
What about design patterns like Type 2 Slowly Changing Dimensions? Do they have semantics?
Yes, and the example is worth unpacking.
At the formal level, a Type 2 SCD has clear semantics: surrogate keys, effective/expiration dates, a current flag, insert-only behavior for changes. You can specify this precisely.
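The formal level is precise enough to sketch directly. The following is a minimal Type 2 SCD in sqlite3; the column names follow the common pattern described above, and the specific schema is an assumption for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee_dim (
        surrogate_key   INTEGER PRIMARY KEY AUTOINCREMENT,
        employee_id     TEXT NOT NULL,      -- natural key
        salary          INTEGER NOT NULL,
        effective_date  TEXT NOT NULL,
        expiration_date TEXT,               -- NULL while the row is current
        is_current      INTEGER NOT NULL DEFAULT 1
    )
""")

def change_salary(emp_id, new_salary, as_of):
    """A change never overwrites history: expire the current row,
    then insert a new version."""
    conn.execute(
        "UPDATE employee_dim SET expiration_date = ?, is_current = 0 "
        "WHERE employee_id = ? AND is_current = 1",
        (as_of, emp_id))
    conn.execute(
        "INSERT INTO employee_dim (employee_id, salary, effective_date) "
        "VALUES (?, ?, ?)",
        (emp_id, new_salary, as_of))

conn.execute("INSERT INTO employee_dim (employee_id, salary, effective_date) "
             "VALUES ('E1', 50000, '2023-01-01')")
change_salary('E1', 55000, '2024-01-01')

# History is preserved: two rows, one expired, one current.
rows = conn.execute(
    "SELECT salary, effective_date, expiration_date, is_current "
    "FROM employee_dim WHERE employee_id = 'E1' ORDER BY surrogate_key"
).fetchall()
print(rows)
# [(50000, '2023-01-01', '2024-01-01', 0), (55000, '2024-01-01', None, 1)]
```

(The single UPDATE that stamps the expiration date is the conventional exception to "insert-only": attribute values are never overwritten, but the outgoing row is closed off.)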
At the conceptual level, a Type 2 SCD encodes a commitment about how you represent entity identity and change over time. In ORM terms, you might express this as a fact type like "Employee had Salary during Period" where Period is a value type with start and end dates. The Type 2 SCD is one implementation pattern for that conceptual distinction.
So when people talk about "the semantics of a Type 2 SCD," they are usually referring to both: what the pattern does technically (formal), and what conceptual commitment it enables (domain). The formal semantics of the pattern is closed. The domain semantics of how you apply it remains open and context-dependent.
In relational theory, we say a relation has an intension (the schema or heading) and an extension (the population). Doesn't that mean the relational model has semantics built in?
The terminology comes from logic and philosophy: intension is the meaning or definition of a concept (what properties define membership); extension is the set of things that fall under the concept.
But the RDBMS does not "have" the intension in the sense of understanding it. The database engine stores tuples and enforces constraints. It has no interpretant in Peirce's sense.
The intension is a formal specification that enables human interpreters to understand what the relation means. It is a sign that points to meaning, not a container that holds meaning. The meaning lives in the people who designed the schema, write the queries, interpret the results, and maintain the shared practices around the system.
The RDBMS preserves the formal structure faithfully. That is valuable. But preserving structure is not the same as possessing meaning. This is consistent with the semantics article's argument: formal artifacts (relation schemas, ORM diagrams, OWL ontologies) are tools that support shared meaning. They do not contain meaning independently of interpretation. There is, however, one important sense in which a relvar's predicate does carry formal semantic weight: it defines what the relation is closed over. The Closed World Assumption operates over the predicate, not over the world at large, which is precisely why the relational model achieves formal constraint enforcement without claiming to model all of reality.
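A short sqlite3 sketch makes the distinction tangible. The predicate ("employee e earns salary s") exists only in the comments and in the heads of interpreters; the engine sees structure and constraints. The schema here is an illustrative assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Intension: the heading plus its constraints. Humans read this as
# "each employee has exactly one positive salary"; the engine does not.
conn.execute("""
    CREATE TABLE employee_salary (
        employee TEXT PRIMARY KEY,
        salary   INTEGER NOT NULL CHECK (salary > 0)
    )
""")

# Extension: the population at a point in time.
conn.execute("INSERT INTO employee_salary VALUES ('alice', 70000)")

# The engine enforces the heading's formal consequences mechanically...
try:
    conn.execute("INSERT INTO employee_salary VALUES ('bob', -1)")
except sqlite3.IntegrityError:
    print("rejected: violates the heading's constraint")

# ...but it has no interpretant: renaming the table to 't1' and the
# columns to 'c1' and 'c2' would change nothing about what the engine
# does. The meaning of 'employee' lives with the people who read and
# maintain the schema.
```

The engine's behavior is driven entirely by the formal specification; swap every identifier for an opaque token and the enforcement is identical, which is exactly the sense in which the intension is a sign rather than a container of meaning.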
Don't LLMs demonstrate that machines can understand semantics? They seem to grasp meaning in ways previous systems could not.
LLMs receive meaning from humans at two levels.
At training time, LLMs learn from human-generated content where humans provided the meaning, context, and intent. In Peirce's terms, the training corpus is billions of human interpretants. The LLM learns statistical patterns across those interpretants.
At runtime, designers and developers provide system prompts, instructions, examples, and constraints that frame what the LLM should do. This context shapes the LLM's behavior within a semantic frame that humans constructed. The LLM does not decide what it is for or what it should attend to. Humans specify that.
The outputs look like understanding because they reflect patterns of human understanding, shaped by human-provided context. But the LLM has no grounding in the world; it has never seen a customer, only text about customers. It has no intent; it predicts tokens and it does not want anything. Nothing happens to it when it is wrong, and it cannot be held accountable because it is not a moral agent. The meaning in an LLM's output originates from humans who created the training data, not from the LLM itself.