Reinventing Model Communication with Gen AI

By G. Sawatzky, knowledge-foundation.ai

Originally published: August 11, 2025

View Slides: Reinventing Model Communication (Slide Deck)

Most modeling teams still communicate primarily through diagrams. Diagrams are useful, but they are only partial representations of a model. They show shape better than they show meaning, and they usually show structure better than they show constraints, scope, or consequences. That gap matters. Many disagreements that surface late in implementation are not disagreements about notation. They are disagreements about what the model actually says.

This is where formal model verbalization becomes valuable. A formal model can often be rendered into disciplined natural language that makes its conceptual distinctions, constraints, and implications easier to inspect. Done well, that verbalization does not replace the model. It makes the model more legible to the people who need to review, validate, and govern it.

Generative AI creates a new opportunity here, but only if we place it carefully. The strongest use is not to let an LLM invent the model or act as the authority on domain meaning. The stronger use is to let the LLM translate a formally grounded model into explanations, narratives, summaries, and answers that remain anchored in the underlying conceptual structure.

The Model Is Primary, the Diagram Is Secondary

A conceptual model is not the picture on the screen. It is the structured set of distinctions, relationships, constraints, and rules that define what can be said about a domain. A diagram may help communicate that structure, but the diagram is not the structure itself.

This distinction is easy to lose in practice. Two people can look at the same box-and-line diagram and infer different cardinalities, different preconditions, or different consequences of a rule. A line between Customer and Order does not by itself explain whether every order must have exactly one customer, whether cancelled orders remain in scope, or whether historical records can be reassigned. Those are modeling commitments, not drawing conventions.
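Those commitments can be made explicit as data rather than left to drawing conventions. The sketch below is purely illustrative — the class and field names are assumptions, not any modeling tool's schema — but it shows how the three questions a Customer–Order line leaves open can be answered in a form a machine or a reviewer can inspect:

```python
from dataclasses import dataclass

# Hypothetical sketch: the commitments a Customer--Order line leaves
# implicit, captured as explicit structured data.

@dataclass(frozen=True)
class RelationshipCommitments:
    subject: str              # e.g. "Order"
    target: str               # e.g. "Customer"
    min_targets: int          # minimum customers per order
    max_targets: int          # maximum customers per order
    includes_cancelled: bool  # do cancelled orders stay in scope?
    reassignable: bool        # may historical records be reassigned?

# "Every order must have exactly one customer; cancelled orders remain
# in scope; historical records cannot be reassigned."
order_customer = RelationshipCommitments(
    subject="Order",
    target="Customer",
    min_targets=1,
    max_targets=1,
    includes_cancelled=True,
    reassignable=False,
)

print(order_customer.min_targets == order_customer.max_targets == 1)  # True
```

Nothing in a box-and-line diagram forces anyone to state these three fields; a formal model does.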

Formal modeling methods matter because they make those commitments explicit. In Object-Role Modeling, for example, verbalizations and constraints are not decorative additions to a picture. They are part of how the model becomes reviewable, testable, and fit for implementation.

What Formal Model Verbalization Gives You

Formal model verbalization is the disciplined rendering of a model into natural language while preserving its conceptual structure. The goal is not literary elegance. The goal is inspectability.

A good verbalization helps a reviewer answer concrete questions: what a rule actually requires, which assumptions it depends on, whether a relationship is mandatory or optional, and what follows when a condition changes.

That is why verbalization is useful across different audiences. Domain experts can inspect whether the conceptual distinctions are right. Architects can inspect whether the model is implementation-ready. Governance and compliance reviewers can inspect whether the stated rules and responsibilities align with policy. The same formal model supports all three, but not everyone will inspect it through the same notation.
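To make the idea concrete, here is a minimal verbalizer in the spirit of ORM-style readings. The class, field names, and sentence template are illustrative assumptions, not the output of any particular tool:

```python
from dataclasses import dataclass

# Illustrative only: render a binary fact type, plus its mandatory and
# uniqueness constraints, into an ORM-style natural-language sentence.

@dataclass(frozen=True)
class FactType:
    subject: str       # e.g. "Order"
    predicate: str     # e.g. "is placed by"
    obj: str           # e.g. "Customer"
    mandatory: bool    # must every subject play this role?
    at_most_one: bool  # uniqueness: at most one object per subject?

def verbalize(ft: FactType) -> str:
    if ft.mandatory and ft.at_most_one:
        quantifier = "exactly one"
    elif ft.mandatory:
        quantifier = "at least one"
    elif ft.at_most_one:
        quantifier = "at most one"
    else:
        quantifier = "zero or more"
    return f"Each {ft.subject} {ft.predicate} {quantifier} {ft.obj}."

sentence = verbalize(FactType("Order", "is placed by", "Customer",
                              mandatory=True, at_most_one=True))
print(sentence)  # Each Order is placed by exactly one Customer.
```

The sentence is inspectable by a domain expert who would never read the constraint notation, yet it is generated from, and therefore faithful to, the formal structure.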

From Verbalization to Model-Grounded Explanation

Verbalization is the starting point, not the endpoint. Once a model is formally specified, you can generate explanations that are grounded in the model rather than improvised around it.

A model-grounded explanation does at least three things:

  1. It stays tied to explicit model elements. Concepts, relationships, constraints, and rules should be identifiable in the explanation.
  2. It supports question-driven inspection. A reviewer should be able to ask what a rule means, what assumptions it depends on, and what follows if a condition changes.
  3. It preserves traceability. The explanation should point back to the model, not become a free-floating narrative with no accountable source.

This opens up more useful forms of communication than a static diagram alone. A system can explain a local portion of the model, summarize the effect of a constraint, compare two related concepts, or walk a reviewer through a scenario. The important point is that these explanations are grounded in the model's formal content, not merely inspired by its theme.
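The traceability property can itself be enforced structurally. The following sketch, with invented identifiers and names, shows one way to make it impossible to add a claim to an explanation without citing the model elements it rests on:

```python
from dataclasses import dataclass, field

# Sketch of the traceability idea: every sentence in an explanation
# carries the ids of the model elements it is grounded in.
# All names and identifiers here are hypothetical.

@dataclass(frozen=True)
class Claim:
    text: str
    element_ids: tuple  # model elements this sentence is anchored to

@dataclass
class Explanation:
    topic: str
    claims: list = field(default_factory=list)

    def add(self, text, *element_ids):
        if not element_ids:
            raise ValueError("every claim must cite at least one model element")
        self.claims.append(Claim(text, element_ids))

exp = Explanation("Order placement")
exp.add("Each Order is placed by exactly one Customer.", "FT-017", "UC-004")
exp.add("Cancelled orders remain in scope for this rule.", "SC-002")

# Every claim points back to the model, not to a free-floating narrative.
print(all(c.element_ids for c in exp.claims))  # True
```

A free-floating narrative simply cannot be constructed through this interface, which is the point.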

LLMs as Translators of Conceptual Structure

This is where LLMs can help. They are good at translating between forms of expression. They can take structured inputs and produce different kinds of outputs: concise summaries, question-and-answer responses, walkthroughs, or role-specific explanations. That makes them useful as translators of conceptual structure.

But the role needs to be defined precisely. An LLM should not be treated as the authority on what the model means. It should not silently invent missing distinctions, soften hard constraints, or reconcile inconsistencies by guessing. Those are modeling and governance decisions, not language-generation tasks.

A safer and more valuable role is this: the formal model remains the authoritative source of meaning, and the LLM translates its structure into summaries, walkthroughs, and role-specific answers, flagging missing distinctions or inconsistencies for human modelers rather than resolving them by guesswork.

In that arrangement, the LLM is not replacing formalism. It is making formalism more accessible.
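One way to hold the LLM to that role is to let it see only the model's own verbalizations and to instruct it not to go beyond them. This sketch builds such a prompt; the instruction wording and function name are illustrative assumptions, not a specific product's API:

```python
# Sketch under assumptions: an LLM prompt whose only source material is
# the model's own verbalizations, with explicit instructions not to guess.

def build_grounded_prompt(question: str, verbalizations: list) -> str:
    facts = "\n".join(f"- {v}" for v in verbalizations)
    return (
        "You are translating a formal conceptual model for a reviewer.\n"
        "Use ONLY the model statements below. If they do not answer the\n"
        "question, say the model does not specify it; do not guess.\n\n"
        f"Model statements:\n{facts}\n\n"
        f"Reviewer question: {question}\n"
    )

prompt = build_grounded_prompt(
    "Can an order exist without a customer?",
    ["Each Order is placed by exactly one Customer.",
     "Cancelled orders remain in scope."],
)
print("do not guess" in prompt)  # True
```

The division of labor is visible in the prompt itself: the model supplies the facts, and the LLM supplies only the translation.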

Why This Matters in Practice

Most organizations do not fail because they lack a diagram. They fail because critical distinctions were never made explicit, or because the people reviewing the model could not see what the formal structure actually committed them to. When that happens, errors survive all the way into implementation, where they become expensive to unwind.

Model-grounded explanation helps in at least four practical situations:

  1. Domain experts validating whether the model's conceptual distinctions match the realities of the domain.
  2. Architects assessing whether the model is genuinely ready for implementation.
  3. Governance and compliance reviewers checking that stated rules and responsibilities align with policy.
  4. Teams resolving disagreements about what the model actually says before those disagreements surface late in implementation.

That is the real promise here. Not a replacement for modeling discipline, but a better communication layer around it.

What Good Tool Support Should Do

If a modeling tool uses LLMs in this area, the design target should be modest and disciplined:

  1. Explain a local portion of the model in plain language.
  2. Summarize the effect of a constraint or rule.
  3. Compare two related concepts and state how they differ.
  4. Walk a reviewer through a scenario step by step.
  5. Keep every explanation traceable back to explicit model elements.

Those capabilities are enough to make formal models more usable for more people. They do not require speculative interface claims. They require disciplined translation from formal structure into accountable explanation.

Limits

Not every explanation generated by an LLM is trustworthy merely because a model exists somewhere in the background. The grounding has to be real. If the model is incomplete, contradictory, or poorly factored, an LLM may still produce fluent but misleading prose. The answer is not to abandon the approach. It is to insist on explicit linking between explanation and model source.
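That insistence can be automated. The sketch below, using invented element identifiers, audits an explanation by flagging claims that cite no model element or cite one that does not exist:

```python
# Illustrative check for "explicit linking": flag explanation claims
# whose cited model elements are missing or unknown. Ids are invented.

def unlinked_claims(claims, model_element_ids):
    """Return claims citing no element, or citing an unknown element id."""
    known = set(model_element_ids)
    bad = []
    for text, cited in claims:
        if not cited or not set(cited) <= known:
            bad.append(text)
    return bad

model_ids = {"FT-017", "UC-004", "SC-002"}
claims = [
    ("Each Order is placed by exactly one Customer.", ["FT-017"]),
    ("Orders can sometimes be merged.", []),        # fluent but ungrounded
    ("Refunds expire after 30 days.", ["RR-999"]),  # cites a nonexistent rule
]
print(unlinked_claims(claims, model_ids))
# ['Orders can sometimes be merged.', 'Refunds expire after 30 days.']
```

Fluent but ungrounded prose is exactly what such a check is designed to catch before a reviewer ever sees it.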

This is also why formal verbalization still matters on its own. Even without an LLM, the discipline of rendering a model into inspectable language improves review quality. LLMs extend that capability. They do not remove the need for it.

Conclusion

The strongest use of Gen AI in model communication is narrower than many current claims suggest, and more useful for that reason. The point is not to let an LLM replace formal modeling, nor to turn every model into a speculative multimedia experience. The point is to use language models to make formal conceptual structure easier to inspect, explain, and question.

That means treating diagrams as aids, not as the model itself; treating verbalization as a serious modeling capability, not a cosmetic feature; and treating LLMs as translators of conceptual structure, not as authorities on domain meaning. If we do that, we get a better communication layer around formal models, and that in turn makes modeling more useful for design, validation, and governance.
