A recent blog post argued that AI agents need to operate under formal semantic closure. I disagree, and the companion articles develop why. These notes address the objections my position is likely to raise: on closed semantics, on what LLMs actually do, and on the philosophical sources behind the framework. They are for readers who want the full argument rather than just the conclusions.
Are you claiming these are the correct definitions of knowledge, information, and semantics?
No. The article titles signal this: "What I Mean by Knowledge, Information, and Semantics" and "Intent-Driven AI Delegates." These are working definitions, offered for adoption by others who find them useful.
In Wittgenstein's terms, this is a language game. The framework proposes a vocabulary for a specific purpose: disciplining how we design and govern AI systems. The community for this language game starts with one, but the hope is that others find it useful enough to adopt. If they do, the community grows and the vocabulary stabilizes through shared use. If not, the framework still has value as a coherent position that makes its commitments explicit.
This is not a claim to have discovered what these terms really mean. It is a pragmatic proposal: if you use these distinctions, certain confusions become easier to avoid, and certain design decisions become easier to justify.
The explicit framing is deliberate. Practitioners who disagree can say "I use these terms differently" rather than "you're wrong about what semantics is." That is a more productive conversation.
Some practitioners hold that terms like "semantics" or "knowledge" have fixed, intrinsic meanings and that disagreement is error rather than difference in usage. That position forecloses the conversation before it starts. The framework is offered for practitioners who accept that working definitions can be useful and precise even when the underlying concepts remain contested. That is most of how engineering actually works.
Why do enterprise practitioners tend toward essentialism about meaning, and why is that a problem?
The essentialism encountered in enterprise settings is usually not philosophically grounded. It tends to arise from data modeling traditions that emphasize one definition per term, enterprise architecture standards that try to fix meanings organizationally, and a folk theory that words simply mean what they mean.
The problem is that this position forecloses useful distinctions. The term "rights" can mean human rights, software licensing rights, contractual rights of use, or access rights in a system. Each is a coherent and distinct meaning in its own domain. Insisting that one of these is the correct meaning and the others are errors is not a philosophical position; it is a governance preference stated as if it were one. Language naturally supports multiple coherent meanings for the same term across different contexts, and that is a feature, not a failure.
The framework's position is different: meaning should be fixed where there are real consequences to misunderstanding, where the fixing is actively maintained through use and governance, and no further. That is closer to how language actually works in practice.
The articles draw on Peirce and Wittgenstein. How faithful are you to their positions?
The framework uses their insights selectively. It is a practitioner's synthesis, not an attempt to faithfully interpret their full positions.
From Peirce, the framework takes three things. The triadic model of meaning (sign, object, and interpretant) establishes that meaning requires interpretation; without an interpretant there is no meaning, only structure. From that follows fallibilism: meaning is not fixed in advance but validated through use and correction. And Peirce's acceptance of modeled referents (things that exist because we conceptualize them, not because they correspond to physical objects) is what makes conceptual schemas practical rather than merely descriptive.
From Wittgenstein, the foundation is "meaning is use." Everything else follows from it: meaning is not a mental state prior to action but something that shows up in practice; it cannot be specified in advance and then applied, only tested in use; and there is no final, complete specification that closes it. The anti-closure stance is not a separate claim but a consequence of taking meaning-as-use seriously.
Peirce's "community of inquiry" concept deserves more development than the framework gives it. For Peirce, meaning doesn't just emerge from individual use; it stabilizes through a community's long-run convergence toward truth. The framework gestures at this with the social level of the semiotic ladder but doesn't push it further. On machine understanding specifically, Peirce's distinction between immediate interpretive effects and the "final interpretant" is pointed: the final interpretant is the settled meaning a community of inquiry converges on over time. LLMs produce immediate interpretive effects, but they cannot participate in the long-run convergence that produces final interpretants, because participation requires being a member of a truth-seeking community with stakes in getting things right.
Wittgenstein distrusted systematizing, and the framework invites that objection. The semiotic ladder, the three-layer architecture, and the levels of semantic engagement are all models of practices rather than examinations of them. In Philosophical Investigations he wrote "Don't think, but look!"; the instruction is to attend to actual use in particular cases rather than build theories about use. He would also resist the definitions of "semantics," "knowledge," and "information" offered in the companion article, preferring to see them as words with family resemblances across uses rather than terms with specifiable meanings.
The partial response is that the article titles signal the intention: "What I Mean by Knowledge, Information, and Semantics" is a deliberate move, naming the language game rather than claiming to have found what these terms really mean. Whether that satisfies Wittgenstein's deeper concern about theorizing is a fair question.
The framework accepts this tension. Without a common vocabulary and a structure like the semiotic ladder, explaining these ideas to architects, designers, and governance professionals becomes impractical. The definitions offered are not claims about the essence of meaning. They are pragmatic commitments intended to discipline system design. Wittgenstein might be uneasy with the apparatus, but the underlying instincts (meaning is use, validated in practice, never fully closed) are his.
Peirce's "community of inquiry" and Wittgenstein's "meaning is use" are closer than they might appear. Both ground meaning in social practice rather than private mental states or formal specification. Both reject the idea that meaning can be closed prior to use. The framework sits at this convergence: meaning is social, practical, and fallible.
Neither Peirce nor Wittgenstein was designing enterprise systems or governing AI, and the framework adapts their insights to problems they did not face. Intent becomes a distinct modelable layer between semantics and execution rather than something implicit in code. Delegation is made explicit, with defined scope and revocability, rather than assumed. Human accountability is built in as a requirement, not treated as a limitation to work around. And the levels of semantic engagement in the appendix provide a way to evaluate how AI systems participate in meaning at different degrees of responsibility.
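To make the delegation point concrete, here is a minimal sketch of what "explicit, with defined scope and revocability" might look like as a data structure. All names are hypothetical illustrations, not a schema the framework prescribes: the point is only that scope, expiry, revocation, and an accountable human become first-class fields rather than assumptions buried in code.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Delegation:
    """An explicit, revocable grant of authority to an AI delegate.

    Field names are illustrative; the framework does not prescribe a schema.
    """
    delegate: str            # the AI system acting under this grant
    accountable_human: str   # the person answerable for outcomes
    scope: frozenset         # the only actions the delegate may take
    expires: datetime        # no open-ended grants
    revoked: bool = False

    def permits(self, action: str, now: datetime = None) -> bool:
        """An action is permitted only while the grant is live and in scope."""
        now = now or datetime.now(timezone.utc)
        return (not self.revoked) and now < self.expires and action in self.scope

    def revoke(self) -> None:
        """Revocation is immediate and unconditional."""
        self.revoked = True


# Usage: a narrowly scoped grant with a named accountable human.
grant = Delegation(
    delegate="triage-bot",
    accountable_human="j.doe",
    scope=frozenset({"label_ticket", "draft_reply"}),
    expires=datetime(2030, 1, 1, tzinfo=timezone.utc),
)
assert grant.permits("label_ticket")
assert not grant.permits("close_ticket")   # out of scope, denied by default
grant.revoke()
assert not grant.permits("label_ticket")   # a revoked grant permits nothing
```

The design choice worth noting is the default-deny posture: anything not explicitly in scope, or anything after revocation or expiry, is refused. That is the structural opposite of delegation assumed implicitly in code.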
This is applied philosophy, not pure fidelity. The goal is useful guidance for practitioners, informed by but not beholden to any single tradition.