Philosophical Alignment and Differences

How faithfully the framework uses Peirce and Wittgenstein, and where it departs from them.

These notes address the philosophical sources behind the two companion articles and where the framework departs from them. They are written for readers who want to understand the intellectual commitments behind the framework, not just apply it. If you are familiar with Peirce, Wittgenstein, or philosophy of language and want to know how faithfully this framework uses them, this is where that question gets answered directly.

Q

Are you claiming these are the correct definitions of knowledge, information, and semantics?

No. The article titles signal this: "What I Mean by Knowledge, Information, and Semantics" and "Intent-Driven AI Delegates." These are working definitions, offered for adoption by others who find them useful.

In Wittgenstein's terms, this is a language game. The framework proposes a vocabulary for a specific purpose: disciplining how we design and govern AI systems. The community for this language game starts with a single member, but the hope is that others find the vocabulary useful enough to adopt. If they do, the community grows and the vocabulary stabilizes through shared use. If not, the framework still has value as a coherent position that makes its commitments explicit.

This is not a claim to have discovered what these terms really mean. It is a pragmatic proposal: if you use these distinctions, certain confusions become easier to avoid, and certain design decisions become easier to justify.

The explicit framing is deliberate. Practitioners who disagree can say "I use these terms differently" rather than "you're wrong about what semantics is." That is a more productive conversation.

Some practitioners hold that terms like "semantics" or "knowledge" have fixed, intrinsic meanings and that disagreement is error rather than difference in usage. That position forecloses the conversation before it starts. The framework is offered for practitioners who accept that working definitions can be useful and precise even when the underlying concepts remain contested. That is most of how engineering actually works.

Q

Why do enterprise practitioners tend toward essentialism about meaning, and why is that a problem?

In philosophy of language, Kripke and Putnam argued that certain terms (natural kinds like "water" or "gold") have their meaning fixed by the actual nature of what they refer to. But their position is more nuanced than crude essentialism: it applies only to specific categories and still involves reference relations to the world and community use.

Modern linguistics has largely moved away from essentialism. Usage-based linguistics, cognitive linguistics, and construction grammar all treat meaning as emergent from use, embodied, and prototype-based rather than fixed and intrinsic.

The essentialism encountered in enterprise settings is usually not philosophically grounded at all. It tends to arise from data modeling traditions that emphasize "one definition per term," enterprise architecture standards that try to fix meanings organizationally, and a general folk theory that words simply mean what they mean.

Concrete example

The term "rights" is sometimes constrained to mean only basic human rights. Uses of the term in software licensing, usage rights, or contractual contexts are then treated as errors rather than as distinct meanings operating in different domains. This forecloses useful distinctions by insisting on a single meaning where language supports several coherent ones.

The irony is that even the positions with essentialist elements, Kripke and Putnam included, end up supporting something closer to the pragmatist view: meaning is fixed by how terms relate to the world and how communities use them over time, not by definitions written down in advance. Enterprise essentialism is usually more rigid than the philosophers it implicitly appeals to.

Q

The articles draw on Peirce and Wittgenstein. How faithful are you to their positions?

The framework uses their insights selectively. It is a practitioner's synthesis, not an attempt to faithfully interpret their full positions.

From Peirce
  • The triadic model of meaning (sign, object, interpretant)
  • The insistence that meaning requires interpretation
  • Fallibilism: meaning is validated through use and correction, not fixed in advance
  • Acceptance of "modeled" referents: things that exist because we conceptualize them

From Wittgenstein
  • "Meaning is use": words acquire meaning through participation in structured activities
  • Rejection of meaning as a mental state prior to action
  • Validation through practice, not prior specification
  • The anti-closure stance: there is no final, complete specification of meaning

Where Peirce might push further

Peirce would likely want a stronger treatment of his "community of inquiry" concept. For Peirce, meaning stabilizes not just through individual use but through a community's long-run convergence toward truth. The framework gestures at this with the "social level" of the semiotic ladder, but does not develop it as fully as Peirce would. On machine understanding, Peirce might also want a firmer stance: he distinguished between immediate interpretive effects and the "final interpretant," the settled meaning within a community of inquiry. He might argue that LLMs produce the former but cannot participate in the latter, because they cannot be members of a truth-seeking community.

Where Wittgenstein might object

Wittgenstein distrusted systematizing. He would likely view the semiotic ladder, the three-layer architecture, and the levels of semantic engagement as exactly the kind of theorizing he warned against. He preferred "don't think, look" — examining actual practices rather than building models of practices. He would also resist defining "semantics," "knowledge," and "information" as if they had essential meanings, preferring to see them as words with family resemblances across uses.

The practitioner's response

The framework accepts this tension. Without a common vocabulary and a structure like the semiotic ladder, explaining these ideas to architects, designers, and governance professionals becomes impractical. The definitions offered are not claims about the essence of meaning. They are pragmatic commitments intended to discipline system design. Wittgenstein might be uneasy with the apparatus, but the underlying instincts — meaning is use, validated in practice, never fully closed — are his.

Where the positions converge

Peirce's "community of inquiry" and Wittgenstein's "meaning is use" are closer than they might appear. Both ground meaning in social practice rather than private mental states or formal specification. Both reject the idea that meaning can be closed prior to use. The framework sits at this convergence: meaning is social, practical, and fallible.

Neither Peirce nor Wittgenstein was designing enterprise systems or governing AI. The framework adapts their insights to problems they did not face:

  • Intent as a distinct modelable layer between semantics and execution
  • Explicit delegation with scope and revocability
  • Human accountability as a design invariant rather than a limitation to overcome
  • A maturity model for evaluating how AI systems participate in meaning at different levels of responsibility

This is applied philosophy, not pure fidelity. The goal is useful guidance for practitioners, informed by but not beholden to any single tradition.