Reinventing Model Communication with Gen AI
By G. Sawatzky, knowledge-foundation.ai
Sept 2025
Why Diagrams Are Not the Model
- Diagrams show shape better than they show constraints and consequences.
- Different readers infer different commitments from the same picture.
- Review improves when the formal model can also be rendered in disciplined language.
Formal Model Verbalization
- Render formal structures into disciplined natural language.
- Make fact types, constraints, and distinctions reviewable.
- Use prose to inspect the model, not replace it.
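The rendering step can be sketched in a few lines. This is a minimal, hypothetical example; real ORM metamodels carry far more structure, and the class and field names here are illustrative only.

```python
from dataclasses import dataclass

# Hypothetical, minimal ORM-style elements. A real metamodel is much richer.
@dataclass
class FactType:
    subject: str      # e.g. "Employee"
    predicate: str    # e.g. "works for"
    obj: str          # e.g. "Department"

@dataclass
class MandatoryRole:
    fact: FactType    # the subject's role in this fact is mandatory

@dataclass
class UniquenessConstraint:
    fact: FactType    # each subject instance plays the role at most once

def verbalize(element) -> str:
    """Render a model element as one disciplined English sentence."""
    if isinstance(element, FactType):
        return f"{element.subject} {element.predicate} {element.obj}."
    if isinstance(element, MandatoryRole):
        f = element.fact
        return f"Each {f.subject} {f.predicate} at least one {f.obj}."
    if isinstance(element, UniquenessConstraint):
        f = element.fact
        return f"Each {f.subject} {f.predicate} at most one {f.obj}."
    raise TypeError(f"No verbalization rule for {type(element).__name__}")

works_for = FactType("Employee", "works for", "Department")
print(verbalize(works_for))
print(verbalize(MandatoryRole(works_for)))
print(verbalize(UniquenessConstraint(works_for)))
```

The point of the sketch: each sentence is generated from, and only from, an explicit model element, so the prose can be reviewed without replacing the model.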
What Verbalization Gives You
- Clear inspection of what is in scope.
- Explicit review of rules, optionality, and dependencies.
- Shared understanding across domain, architecture, and governance audiences.
Model-Grounded Explanation
- Ask natural-language questions about explicit model content.
- Explain local consequences without drifting away from the schema.
- Ground answers in rules, constraints, and citations.
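One way to keep answers grounded is to let the answer layer compose only from stored, citable rules. A sketch under that assumption (the rule store and IDs are hypothetical):

```python
# Hypothetical rule store: answers may only quote stored rules, with citations.
RULES = {
    "R1": "Each Employee works for at most one Department.",
    "R2": "Each Department is managed by exactly one Employee.",
}

def grounded_answer(question: str, relevant_rule_ids: list[str]) -> str:
    """Compose an answer strictly from cited rules; no free-form claims."""
    cited = [f"{RULES[rid]} [{rid}]" for rid in relevant_rule_ids if rid in RULES]
    if not cited:
        return "The model does not state this explicitly."
    return " ".join(cited)

print(grounded_answer("Can an employee work for two departments?", ["R1"]))
```

An LLM can paraphrase around the cited text for readability, but the citation anchors stay attached so the reader can trace every claim back to the schema.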
Why Grounding Matters
- Explanations must stay tied to explicit model elements.
- Fluent prose without grounding is easy to over-trust.
- Traceability matters as much as readability.
LLMs as Translators
- LLMs are useful for translating structure into readable explanation.
- They should not be treated as authorities on domain meaning.
- The formal model remains the source of truth.
Question-to-Model Mapping
- Map the user question to the relevant model region.
- Focus the explanation on the concepts and rules in scope.
- Keep context while narrowing attention.
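The mapping step can be as simple as scoring regions by term overlap. The sketch below is deliberately naive and the region vocabularies are invented; a production tool would retrieve over the model's own verbalizations or use embeddings.

```python
# Naive question-to-region mapping by term overlap. Regions and vocabularies
# are illustrative placeholders.
REGIONS = {
    "employment": {"employee", "department", "works", "contract"},
    "governance": {"approval", "authority", "decision", "policy"},
}

def map_question(question: str) -> str:
    terms = set(question.lower().replace("?", "").split())
    scores = {name: len(terms & vocab) for name, vocab in REGIONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unmapped"

print(map_question("Which department does an employee work for?"))
```

Returning "unmapped" rather than a forced guess matters: narrowing attention must not silently drop the question outside the model's scope.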
Progressive Disclosure
- Reveal detail on demand while preserving formal context.
- Drill down from overview to local rules and implications.
- Support review without overwhelming the reader.
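Progressive disclosure can be modeled as layered views per region: overview first, rules and implications on demand. A sketch with an invented region and invented layer names:

```python
# Sketch: each region keeps an overview plus on-demand detail layers, so the
# reader drills down without losing the enclosing context. Content is made up.
MODEL_VIEW = {
    "Employment": {
        "overview": "Employees work for departments under contracts.",
        "rules": ["Each Employee works for at most one Department."],
        "implications": ["Transferring an Employee ends the prior assignment."],
    },
}

def disclose(region: str, depth: int) -> list[str]:
    """depth 0: overview only; 1: plus rules; 2: plus implications."""
    view = MODEL_VIEW[region]
    layers = [["overview"],
              ["overview", "rules"],
              ["overview", "rules", "implications"]]
    out = []
    for key in layers[min(depth, 2)]:
        value = view[key]
        out.extend(value if isinstance(value, list) else [value])
    return out

print(disclose("Employment", 0))
print(disclose("Employment", 2))
```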
Summaries with Traceability
- Generate concise explanations for complex model regions.
- Adapt detail level to the reviewer without changing the content.
- Provide citations to model elements and rules.
What Good Tool Support Should Do
- Verbalize formal structures clearly.
- Enable direct Q&A against the model.
- Expose ambiguity instead of fabricating certainty.
Example Interaction Pattern
- Reviewer asks a question in natural language.
- The system identifies the relevant model structures.
- The answer cites rules and local conceptual elements.
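The three steps above can be wired together as one pipeline. This is a sketch, not a reference implementation: the model store, rule IDs, and the term-overlap retrieval are all stand-ins for real components.

```python
# The interaction pattern as one hypothetical pipeline: map the question to
# model structures, then answer only by citing them.
MODEL = {
    "R1": {"region": "employment",
           "text": "Each Employee works for at most one Department."},
    "R2": {"region": "governance",
           "text": "Each Policy is approved by exactly one Officer."},
}

def answer(question: str) -> str:
    q = set(question.lower().replace("?", "").split())
    # Steps 1-2: identify relevant structures by naive term overlap
    # (a placeholder for real retrieval over the model's verbalizations).
    hits = [rid for rid, r in MODEL.items()
            if q & set(r["text"].lower().rstrip(".").split())]
    # Step 3: the answer cites the rules it relies on.
    if not hits:
        return "No matching model structures; the model is silent here."
    return " ".join(f"{MODEL[rid]['text']} [{rid}]" for rid in hits)

print(answer("Can an Employee work for two Departments?"))
```

Note the failure mode: when nothing matches, the system says the model is silent instead of fabricating an answer.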
Where This Helps in Practice
- Domain review: validate whether the distinctions are right.
- Architecture review: inspect consequences and implementation implications.
- Governance review: inspect authority, responsibility, and decision conditions.
- Tool usability: make formal models easier to inspect without losing rigor.
The value is not replacing formal modeling. It is adding a better communication layer around it.
Limits
- Fluent explanation is not enough without real grounding.
- Incomplete or contradictory models still produce weak explanations.
- Traceability back to the formal source is non-negotiable.
LLMs extend formal verbalization. They do not remove the need for disciplined modeling.
ORM Tool Opportunities
- Direct model querying: natural-language questions grounded in fact types, constraints, and verbalizations.
- Constraint explanation: explain why a rule holds and what it restricts.
- Review summaries: concise walkthroughs of local model regions with citations.
Build on existing verbalizations and formal representations rather than bypassing them.
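Constraint explanation, in particular, can combine verbalization with a demonstration of what the constraint rejects in a sample population. A sketch with illustrative names and data:

```python
from collections import Counter

# Sketch: explain a uniqueness constraint by verbalizing it and showing which
# subjects in a sample population it would reject. All names are illustrative.
def explain_uniqueness(subject: str, predicate: str, obj: str,
                       population: list[tuple[str, str]]) -> str:
    counts = Counter(s for s, _ in population)
    offenders = sorted(s for s, n in counts.items() if n > 1)
    rule = f"Each {subject} {predicate} at most one {obj}."
    if offenders:
        return rule + f" Violated by: {', '.join(offenders)}."
    return rule + " The sample population satisfies it."

facts = [("alice", "Sales"), ("bob", "IT"), ("alice", "HR")]
print(explain_uniqueness("Employee", "works for", "Department", facts))
```

Showing a concrete rejected case often communicates a constraint's force better than the rule text alone.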
Practical Guardrails
- Do not invent missing distinctions.
- Do not soften hard constraints into vague prose.
- Surface ambiguity explicitly.
- Link answers back to the formal source.
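These guardrails can be enforced mechanically on candidate answers before they reach the reviewer. A sketch, assuming a known set of citable element IDs and a crude softener word list (both invented here):

```python
import re

# Hypothetical guardrail check: reject answers that lack citations to model
# elements, cite unknown elements, or soften hard constraints.
KNOWN_IDS = {"R1", "R2"}
SOFTENERS = ("usually", "typically", "generally", "often")

def check_answer(answer: str) -> list[str]:
    """Return a list of guardrail violations; empty means the answer passes."""
    problems = []
    cited = set(re.findall(r"\[(\w+)\]", answer))
    if not cited:
        problems.append("no citation to a model element")
    elif cited - KNOWN_IDS:
        problems.append(f"unknown citations: {sorted(cited - KNOWN_IDS)}")
    if any(w in answer.lower() for w in SOFTENERS):
        problems.append("hard constraint softened into vague prose")
    return problems

print(check_answer("Employees usually work for one department."))
print(check_answer("Each Employee works for at most one Department. [R1]"))
```

A word list will not catch every softening, but even a crude filter surfaces the common failure: fluent prose that quietly weakens "at most one" into "usually one".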
Outcomes
- Faster alignment and fewer misinterpretations.
- Explainable, testable conceptual understanding.
- Accessible communication for diverse audiences.
- Reduced cognitive load through better filtering and presentation.