1. The Accountability Problem That AI Strategy Misses
When enterprise AI projects fail in production, the post-mortem typically focuses on data quality, model performance, or change management. Those are common contributing factors, but a more basic problem is seldom examined: organizations design AI into workflows before identifying which decisions require a named human who remains accountable and which tasks can be delegated safely. The result is systems that produce outputs nobody fully owns, errors that cannot be traced to a responsible party, and governance structures that do not hold up under regulatory or organizational pressure.
This is less a technical problem than a sequencing problem in system design. The question of where AI belongs in a workflow should follow from an analysis of what the workflow is actually doing, which acts create commitments, and who is accountable for those commitments. That is the kind of analysis DEMO, Jan Dietz's Design and Engineering Methodology for Organizations, is built to provide.
In plain terms, DEMO tries to describe the essential structure of an organization independent of the current software stack, workflow tooling, or reporting lines. It gives you a compressed and relatively stable description of a chosen scope of work in terms of who makes commitments, who fulfills them, and which acts merely support those commitments. That is useful for AI design because implementation details change quickly, while accountability structure usually should not.
The argument here is fairly specific: before designing AI into any consequential enterprise workflow, it is worth knowing which acts in that workflow create genuine organizational commitments and require human accountability, and which acts only process or carry information and can be automated without displacing responsibility. If you get that distinction wrong early, you tend to produce systems that are technically functional but organizationally flawed.
2. Why Enterprise AI Pilots Stall
Multiple independent research efforts have converged on the same finding from different angles: most enterprise AI pilots do not translate into sustainable operational results. MIT's NANDA initiative found that only approximately 5% of generative AI programs achieved rapid revenue acceleration. Gartner predicted that 60% of AI projects without AI-ready data foundations would be abandoned through 2026. S&P Global found that the average organization scrapped 46% of its AI proofs-of-concept before reaching production. These figures use different denominators and measure different things; however, they point in the same direction.
The more useful question is why, and the diagnosis from all three sources is consistent: the problem is rarely the model itself. It is the absence of clear grounding in what the organization is actually trying to do and who is responsible for the outcome.
The specific problem that DEMO analysis addresses is that organizations treat AI adoption as a process automation question when it is, at a deeper level, an organizational design question. Automating an existing process faster does not change what the process is for, who commits to its outcomes, or what happens when something goes wrong. DEMO is useful because it forces those prior questions before the automation decisions are made.
3. What DEMO Actually Analyzes
Jan Dietz's framework, described in Enterprise Ontology: Theory and Methodology (Springer, 2006), starts from the premise that an organization should be understood in terms of its essential structure, i.e. the transactions that create value and the commitments that govern them, not in terms of its current implementation in software, processes, or reporting structures. That distinction between essence and implementation is the main point of the analysis.
In plainer terms, DEMO is useful here because it separates the acts that actually create commitments from the acts that only process information or move it around.
At the center of DEMO is a three-way classification of acts performed in organizational workflows.
Original Acts (O-Acts) are the acts that create something genuinely new: a new commitment, a new fact, a new state of affairs that did not exist before the act was performed. They are further decomposed into Production Acts (P-Acts), which execute the actual work that changes the world, and Coordination Acts (C-Acts), which are the request-promise-accept sequences that establish and formalize commitments between actors. O-Acts require human accountability because they involve genuine commitment-making. A machine can execute steps, but it cannot be held responsible for a commitment in any legally or organizationally meaningful sense.
Informational Acts (I-Acts) involve processing information, such as calculating, querying, analyzing, or generating reports. They do not create new commitments; they support the acts that do.
Documental Acts (D-Acts) involve handling the physical or digital form of information: archiving, transmitting, formatting, printing. They are about the carrier of information rather than its content or the commitments it represents.
The practical significance of this classification for AI system design is direct. I-Acts and D-Acts are, in general, safe candidates for automation, where "safe" means the act itself is not where organizational commitment is created, not that automation is risk-free in every operational sense. They do not replace human accountability because they never carried it in the first place. O-Acts are a different matter. When an AI system performs what is structurally an O-Act, i.e. when it makes or accepts a commitment on behalf of the organization, the accountability question becomes critical. Either a human remains responsible for that commitment, or the commitment has no accountable party behind it.
4. The ProMatch Example
Consider a simplified transaction from a professional services marketplace: matching a client with a service provider. In DEMO terms, the transaction involves a client as requester and the marketplace as executor, with the following essential steps.
The client requests a match. The marketplace promises to find one. The marketplace executes the matching process, which is the core Production Act where value is created. The marketplace states that a match has been found. The client accepts the proposed match.
The C-Acts in this sequence, the request, promise, statement, and acceptance, establish the commitment structure. The P-Act, executing the matching process, is where the organization's judgment and accountability are concentrated.
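The happy path of this transaction can be sketched as a small state machine that enforces the request-promise-execute-state-accept ordering. The class and actor names are illustrative assumptions; a full DEMO transaction pattern also includes decline and reject paths, which this sketch omits.

```python
# Legal successor for each step of the happy path. None marks the start.
LEGAL_NEXT = {
    None: "request",
    "request": "promise",
    "promise": "execute",   # the P-Act: where value is actually created
    "execute": "state",
    "state": "accept",
}

class Transaction:
    """One DEMO-style transaction between a requester and an executor."""

    def __init__(self, requester: str, executor: str):
        self.requester, self.executor = requester, executor
        self.history: list[str] = []

    def step(self, act: str) -> None:
        last = self.history[-1] if self.history else None
        if LEGAL_NEXT.get(last) != act:
            raise ValueError(f"{act!r} cannot follow {last!r}")
        self.history.append(act)

t = Transaction(requester="client", executor="marketplace")
for act in ["request", "promise", "execute", "state", "accept"]:
    t.step(act)
# t.history == ["request", "promise", "execute", "state", "accept"]
```

The ordering constraint is what makes the commitment structure visible: the executor cannot state a result before promising, and the requester's acceptance closes the transaction.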
Now consider where AI plausibly belongs in this transaction. Filtering candidates, surfacing patterns in historical data, ranking options by stated criteria: these are informational acts. An AI can do them well, and doing so does not replace accountability, because the informational acts were never the point where accountability was held. The accountability lives in the production act and the commitment structure around it, i.e. the human judgment about which match is actually right for this client in this situation, and the human commitment that stands behind that judgment.
An AI-assisted matching system that generates ranked candidates for a human to evaluate and commit to is the clearest case and is organizationally sound. But that does not mean a human must manually intervene at the same point in every design. In some cases, the commitment can be pre-authorized by a human-defined policy with monitoring, alerting, auditability, and a clearly named accountable owner behind it. What matters is not simply whether a person clicks approve in the moment. What matters is whether the organization has made the accountability structure explicit and can say who stands behind the outcome when something goes wrong. A system that generates and accepts a match with no such structure has no accountable party behind it at all. The defensible design is the one where the responsible human and the governance structure are both clear.
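A pre-authorized policy of this kind can be sketched as follows. The field names, thresholds, and the two-outcome decision are hypothetical illustrations of the pattern, not a prescribed design: the essential features are a named owner and an explicit escalation path for cases outside the authorized bounds.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationPolicy:
    """A human-owned authorization under which an AI may accept a commitment."""
    owner: str                 # named human who remains accountable
    max_contract_value: float  # upper bound on what may be auto-accepted
    min_confidence: float      # model-score floor for auto-acceptance

def decide(policy: DelegationPolicy, value: float, confidence: float) -> str:
    # Within the authorized bounds: the AI acts, but the policy owner
    # remains the accountable party for the commitment.
    if value <= policy.max_contract_value and confidence >= policy.min_confidence:
        return f"auto-accept (accountable: {policy.owner})"
    # Outside the bounds: escalate to case-by-case human judgment.
    return f"escalate to {policy.owner}"

policy = DelegationPolicy(owner="ops lead",
                          max_contract_value=5_000.0,
                          min_confidence=0.9)
```

Note that both branches name the same human: delegation changes who performs the act, not who is accountable for it.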
This distinction is easy to state and harder to apply in practice, which is the reason a structured analysis method is useful. Without something like DEMO, the tendency is to look at each step in a process and ask whether AI can do it, rather than asking which steps are commitment-creating acts that require an accountable human behind them.
5. The Authorized versus Autonomous Distinction
One practical test that follows from DEMO analysis is the distinction between authorized AI and autonomous AI. The distinction is not primarily about capability; it is about accountability structure.
Authorized AI operates within a commitment structure that a human has established and remains responsible for. A credit officer who sets eligibility rules and delegates their application to an automated system is still the accountable party for decisions that fall within those rules. The AI is executing a pre-authorized commitment, not making a new one.
Autonomous AI, in the stronger sense, makes or accepts commitments without a named human remaining accountable for them and without an explicit structure for oversight, intervention, and audit. Whether a human reviews each case manually is a secondary design choice that depends on the stakes and the regulatory context. The central design question is who is accountable when the system gets it wrong, and whether that accountability structure is explicit and enforceable.
DEMO analysis makes this question easier to answer because it forces an explicit accounting of which acts in a workflow are Original Acts and who the human actor is behind each one. An organization that cannot answer that question for a given AI system has not designed the system; it has deployed a capability without a governance structure.
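The "explicit accounting" described above can be reduced to a simple audit: list every act in the workflow and flag any Original Act with no named accountable human. The dict-based workflow format below is an assumed representation for illustration, not a DEMO artifact.

```python
def ungoverned_o_acts(workflow: list[dict]) -> list[str]:
    """Return the names of Original Acts that lack an accountable human.

    Each act is a dict with a "name", a "kind" ("O", "I", or "D"), and,
    for O-Acts, an "accountable_human" entry naming the responsible party.
    """
    return [
        act["name"]
        for act in workflow
        if act["kind"] == "O" and not act.get("accountable_human")
    ]

workflow = [
    {"name": "rank candidates", "kind": "I"},
    {"name": "accept match", "kind": "O", "accountable_human": "account manager"},
    {"name": "auto-renew contract", "kind": "O"},  # deployed without an owner
]
gaps = ungoverned_o_acts(workflow)
# gaps == ["auto-renew contract"]
```

A non-empty result is exactly the failure mode the text describes: a capability deployed without a governance structure.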
6. What This Means Before System Design Begins
The sequencing argument in this article's title is practical rather than theoretical. DEMO analysis is not a prerequisite for every AI project. It is a prerequisite for AI projects where the accountability question is consequential, i.e. where wrong outputs have legal, financial, regulatory, or organizational consequences that need to be traced to a responsible party.
For projects in that category, the sequence matters. Identifying which acts in a workflow are O-Acts, who the accountable human actor is behind each one, and what "authorized delegation" means for each of those acts should happen before the AI system is designed, not as a governance afterthought once the system is in production. In practice, the cost of revisiting accountability structure after a system is deployed is substantially higher than the cost of analyzing it before design begins.
The practical output of a DEMO-informed analysis is not a diagram. It is a set of decisions: which acts can be automated without accountability concerns, which require case-by-case human judgment, and which can be delegated under a human-owned authorization structure with appropriate monitoring, audit trail design, and escalation paths. Those decisions inform system design, validation requirements, and the handling of cases that fall outside the authorized parameters.
For the governance architecture that implements these delegation decisions in a running system, see Intent-Driven AI Delegates.
References
- Dietz, Jan. Enterprise Ontology: Theory and Methodology. Springer, 2006.
- MIT Project NANDA. "The GenAI Divide: State of AI in Business 2025." July 2025. Based on 150 executive interviews, a survey of 350 employees, and an analysis of 300 public AI deployments.
- Gartner. "Lack of AI-Ready Data Puts AI Projects at Risk." February 26, 2025. Analyst: Roxane Edjlali.
- S&P Global Market Intelligence. "Voice of the Enterprise: AI and Machine Learning, Use Cases 2025." Survey of 1,006 IT and business professionals across North America and Europe.