Chapter 5
The OLRF Framework — Components and Their Roles
“For every complex problem there is an answer that is clear, simple, and wrong.”
H. L. Mencken, The Divine Afflatus, New York Evening Mail, 1917
Architecture as Constitutional Expression
The OLRF architecture is not a single system. It is a coordinated set of components, an ecosystem specification rather than a product design, that defines how legal norms are represented, published, accessed, and applied in a machine-mediated governance environment. Its design reflects an insight that has appeared throughout Part I in various formulations and that can now be stated with architectural precision: the problems of digital governance are not primarily problems of insufficient technology. They are problems of insufficient structure. The technology exists. What has been missing is a principled account of how it should be organised: which components should perform which functions, where the constitutional boundaries between them should lie, and how the entire system adapts as the capabilities of AI evolve.
The preceding chapters established three models of AI participation in legal governance (deterministic evaluation, guided evaluation, autonomous reasoning) and five constitutional requirements that apply across all three (traceability, verifiability, accountability, the right to reasons, judicial reviewability). This chapter describes the architectural components through which those requirements are met and shows how each component’s function shifts across the three models. The components are constant. Their roles are variable. The constitutional commitments they serve are not.
The Six Components
The OLRF proposes six core components. Each is described here in terms of what it is and what it does across the three models.
1. The Legislative Text
The legislative text (statute, regulation, administrative guideline) is the foundation of the entire architecture and its point of contact with the existing legal order. It is structured in accordance with established legal document standards, principally LegalDocML¹ and Akoma Ntoso², which provide formal representations of legislative structure, citation conventions, and FRBR-based³ identifiers enabling precise, stable references to individual provisions, sections, subsections, and clauses.
The text layer is the responsibility of the legislature. It is not modified, supplemented, or replaced by any other component of the architecture. It is the reference point against which all other components are measured. This is true in all three models. In Model A, the text is the source from which the Decision Tree is derived. In Model B, it is the primary source that the Legal Agent reads and the Decision Tree validates against. In Model C, it is the sole source from which the agent reasons, with the Decision Tree serving as a retrospective audit instrument.
The dual publication model (described in detail in Chapter 6: The Normative Core — Decision Trees and the Dual Publication Model) proposes that every piece of legislation to be applied by automated systems be published in two forms simultaneously: the traditional human-readable text and a machine-applicable normative specification. These two forms are not alternatives. They are complements. The text remains the primary legal source. The normative specification is a derived artefact: an authoritative account of how the norm is to be applied, published alongside the text, linked to it at sub-paragraph level, and versioned in synchrony with it.
2. The Decision Tree
The Decision Tree is the machine-executable representation of legal logic derived from the legislative text. It encodes the conditions, legal consequences, exceptions, parameters, discretion points, and tests that together specify how a particular norm is to be applied. Its relationship to the statutory text is explicit, typed, and auditable at sub-paragraph level through the sub-normative linkage system (described in Chapter 7: Sub-normative Linkage — The Fine Thread Between Code and Law).
The Decision Tree’s role varies across the three models:
In Model A, the Decision Tree is the evaluator. It receives validated facts and applies the normative logic deterministically to produce a signed Decision Package. Its execution semantics guarantee that the same facts always produce the same outcome. This is the strongest form of the equal treatment guarantee.
In Model B, the Decision Tree is the validator. It defines the normative structure (conditions, consequences, exceptions, discretion points) against which a qualified Legal Agent’s determination is checked. Where the agent’s result is consistent with the tree’s structure, the validation confirms it. Where it deviates, the deviation is documented and either justified or escalated. The tree does not decide. It verifies.
In Model C, the Decision Tree is the audit protocol. The agent applies the statutory text directly. The tree is consulted retrospectively to document the agent’s determination against a publicly specified normative structure, producing a transparent, auditable record. The tree does not decide and does not verify during the process. It makes visible after the fact.
Across all three models, the Decision Tree is constructed by a responsible authority, published in the Registry, signed cryptographically, and linked to the legislative text through the sub-normative anchor system. Its construction is an act of interpretation (the same interpretive act that has always been required to move from enacted text to administrative practice). What changes is the explicitness of the interpretation, the verifiability of its connection to the text, and the accountability of the interpreter.
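To fix ideas, the following sketch shows one way a Decision Tree and its elements might be represented as typed data structures. It is illustrative only; all class and field names are hypothetical assumptions, not drawn from the OLRF specification itself.

```python
# Illustrative sketch only: class and field names are hypothetical,
# not drawn from the OLRF specification itself.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class ElementKind(Enum):
    CONDITION = "condition"          # a test applied to the validated factual record
    CONSEQUENCE = "consequence"      # the legal effect that follows when conditions are met
    EXCEPTION = "exception"          # a carve-out from a condition or consequence
    PARAMETER = "parameter"          # a value fixed by the norm (rate, threshold, period)
    TEST = "test"                    # a structured legal test composed of sub-elements (assumption)
    DISCRETION_POINT = "discretion"  # a node that escalates to a human official


@dataclass
class TreeElement:
    element_id: str                   # stable identifier within the tree
    kind: ElementKind
    anchor_id: str                    # reference to the sub-normative anchor (Chapter 7)
    expression: Optional[str] = None  # machine-evaluable expression, where the element is executable
    children: list["TreeElement"] = field(default_factory=list)


@dataclass
class DecisionTree:
    tree_id: str
    version: str            # versioned in synchrony with the legislative text
    statute_ref: str        # FRBR-style reference to the implemented norm
    root: TreeElement
    signature: bytes = b""  # signature of the publishing responsible authority
```

Whether the executable expressions are evaluated (Model A), used as a validation frame (Model B), or consulted only retrospectively (Model C) is a property of the deployment, not of the structure.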
3. The Coverage Map
Sub-normative linkage makes visible the relationships between the Decision Tree and the law. It does not show what the Decision Tree leaves out, or why. The Coverage Map addresses that second question.
Every Decision Tree deliberately implements only part of the norm it refers to. Some provisions require context-dependent weighing of competing factors (such as proportionality assessments or best-interest determinations) that cannot be reduced to fixed decision logic without losing the flexibility the law intends. Others apply to circumstances so varied that no manageable set of rules can cover them. Still others deliberately grant broad discretion to the implementing authority so that each case can be assessed individually. Where a Decision Tree attempts to automate provisions of this kind, it does not faithfully implement the norm. It narrows it, replacing the legislature’s intended openness with rigid logic.
The Coverage Map is a structured document, published alongside the Decision Tree in the Registry, that classifies every element of the implemented norm according to how the Decision Tree handles it. Six classifications are defined:
Implemented marks those portions of the norm that are fully operationalised in the Decision Tree: the conditions, effects, exceptions, and parameters that are explicitly modelled and evaluated.
Discretionary marks those portions that involve the exercise of human judgment: the elements at which the Decision Tree generates a Discretion Point escalation rather than a deterministic output.
Excluded marks those portions that are deliberately not operationalised in the Decision Tree, for example because they are declaratory, address circumstances outside the system’s scope, or raise legal questions that the responsible authority has decided should be resolved by human officials.
Contested marks those portions whose operationalisation involves an interpretive dispute, whether because the text is ambiguous, judicial opinion is unsettled, or the responsible authority has made an interpretive choice that it acknowledges is contestable.
Model Assignment classifies every element marked as “Implemented” according to the model under which it is processed: Model A (deterministic evaluation by the Decision Tree), Model B (guided evaluation by a Legal Agent, validated against the Decision Tree), or Model C (autonomous legal reasoning by an agent, with retrospective audit against the Decision Tree). This classification is not a technical annotation. It is a normative decision with constitutional implications: it determines the degree of AI autonomy, the form of accountability, and the quality of human involvement that the citizen encounters. The model assignment is subject to the Wesentlichkeitstheorie (Chapter 12): for norm elements that affect fundamental rights, the assignment must be made by the legislature or under sufficiently specific legislative delegation, not left to the implementing authority’s technical discretion⁴.
Agent Certification Requirement specifies, for every element assigned to Model B or C, the certification class that a Legal Agent must hold in order to operate on that element. The certification class is determined by two factors: the model (Model B requires subsumption competence, Model C requires autonomous legal reasoning competence) and the constitutional weight of the norm element (an element affecting access to social welfare requires a higher certification class than an element involving routine administrative classification, even if both are assigned to the same model). The certification requirement links the Coverage Map to the agent certification system described in Chapter 10: Agent Certification — The State Examination for Machine Actors. It ensures that the qualification threshold is not set generically for all automated processes, but calibrated to the specific normative domain, the specific model, and the specific constitutional weight of the decision at stake⁵.
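A minimal sketch of how a single Coverage Map entry might carry these classifications follows. The field names, the certification class labels, and the validation logic are assumptions for illustration, not part of any published OLRF schema.

```python
# Illustrative sketch only: field names and certification labels are hypothetical,
# not drawn from a published OLRF schema.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Classification(Enum):
    IMPLEMENTED = "implemented"
    DISCRETIONARY = "discretionary"
    EXCLUDED = "excluded"
    CONTESTED = "contested"


class Model(Enum):
    A = "deterministic evaluation"
    B = "guided evaluation"
    C = "autonomous legal reasoning"


@dataclass
class CoverageMapEntry:
    norm_element: str                          # sub-paragraph reference into the statute
    classification: Classification
    model: Optional[Model] = None              # model assignment, required when IMPLEMENTED
    certification_class: Optional[str] = None  # agent certification requirement for Models B and C
    justification: str = ""                    # why excluded, or why the interpretation is contested

    def validate(self) -> None:
        """Enforce the dependencies between the classifications described above."""
        if self.classification is Classification.IMPLEMENTED and self.model is None:
            raise ValueError(f"{self.norm_element}: implemented elements need a model assignment")
        if self.model in (Model.B, Model.C) and self.certification_class is None:
            raise ValueError(f"{self.norm_element}: Models B and C require a certification class")


# Hypothetical usage: an element assigned to Model B must also carry a certification class.
entry = CoverageMapEntry("art. 12(2)", Classification.IMPLEMENTED, Model.B, certification_class="B-2")
entry.validate()  # passes; omitting certification_class would raise ValueError
```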
The six classifications, taken together, provide a complete account of how the responsible authority has chosen to translate the legislature’s will into operational form. They document not only what is automated and what is not, but also how it is automated (under which model), and who is qualified to automate it (at which certification level). Every Coverage Map is therefore, in substance, a document of democratic accountability.
The Coverage Map serves three functions that no other component of the OLRF architecture provides.
For legal and constitutional oversight, it is the instrument of democratic control over the scope and mode of automation. A legislature that publishes an authoritative Decision Tree alongside a statute exercises direct control over how the norm is applied. But a legislature reviewing an interpretive Decision Tree published by an administrative authority needs to know what has been automated, under which model, with which agent certification requirements, what has been preserved for human judgment, and what has been deliberately excluded. Without a Coverage Map, the legislature cannot exercise meaningful oversight of whether the authority’s implementation choices are consistent with legislative intent. With it, those choices are explicit, documented, and challengeable through normal mechanisms of legislative oversight.
For courts and auditors, the Coverage Map is the first document to consult when reviewing a challenged automated decision. It tells the court, immediately and with precision, whether the aspect of the norm at issue falls into the Implemented, Discretionary, Excluded, or Contested portion of the Decision Tree. If it is Implemented, the court can examine the model assignment and determine whether the element was processed under Model A (in which case the Decision Package records a deterministic evaluation path), Model B (in which case it records the agent’s subsumption and the validation framework’s assessment), or Model C (in which case it records the agent’s full reasoning chain and the retrospective audit’s assessment). The court can also verify, through the certification requirement, whether the agent that produced the determination held the required certification at the time of the determination. If the element is Discretionary, it can examine the discretion record to verify that human judgment was lawfully exercised. If it is Excluded, it can assess whether the exclusion was appropriate. If it is Contested, the court has before it the implementing authority’s own acknowledgement of interpretive uncertainty, which serves both as evidence and as a potential ground of challenge.
For citizens, the Coverage Map is, when rendered in accessible language, the most honest account of what the system can and cannot do. A citizen whose application was processed automatically and who received an adverse decision needs to know not only whether the relevant aspect of the norm was automated, but how: under which model, by which type of agent, and with what qualification. A citizen whose circumstances fall into a discretionary or excluded category needs to know that human judgment was involved, and by whom it was exercised. The Coverage Map, accessed through the Registry and explained by the AI layer in plain language, is the foundation of meaningful procedural rights in an automated governance environment.
There is a further, less obvious function of the Coverage Map: it disciplines the process of Decision Tree development itself. The requirement to produce a Coverage Map (to classify every portion of the implemented norm, to assign a model, to specify a certification requirement, to justify exclusions, to acknowledge contested interpretations) forces the responsible authority to engage with the norm in its entirety, rather than operationalising the straightforward parts and leaving the difficult parts implicit. In practice, the process of filling in the Coverage Map amounts to a structured legal review of the implementation, conducted at the level of sub-normative detail that paragraph-level documentation cannot support. The model assignment and the certification requirement add a further layer of discipline: the authority must justify not only what it automates, but how much autonomy it grants to the automating agent, and how high it sets the qualification bar. Authorities that have undertaken this process consistently report that it surfaces implementation errors, clarifies previously implicit interpretive choices, and identifies gaps in the Decision Tree that would otherwise have remained invisible until they produced an incorrect decision in production.
Relationship to Article 14 of the EU AI Act
The Coverage Map has a direct structural relationship to the human oversight requirements of the EU AI Act. Article 14 requires that high-risk AI systems be designed so that natural persons can effectively oversee them during use. The provision specifies that human overseers must be able to understand the system’s capabilities and limitations, to correctly interpret its output, and to decide in any particular situation not to use the system or to override its result. Article 14 also requires that oversight measures be commensurate with the risks, the level of autonomy, and the context of use of the system.
However, Article 14 does not specify how a system should document the boundary between automated operation and human judgment, nor does it require an explicit classification of which parts of the applicable norm are fully automated, which are escalated for human decision, and which are excluded from automation altogether. The Coverage Map fills precisely this structural gap. By classifying every element of the implemented norm as Implemented (under a specified model, with a specified certification requirement), Discretionary, Excluded, or Contested, it provides the documentary infrastructure that makes effective human oversight, as required by Article 14, operationally possible.
In concrete terms: the Discretionary classification identifies the exact points at which the system is designed to escalate to a human decision-maker, thereby defining the scope within which Article 14’s oversight requirement applies in real time. The model assignment determines the form of oversight: under Model A, the human oversees a deterministic output; under Model B, the human oversees an agent’s subsumption validated against the tree; under Model C, the human approves an autonomous determination assessed by retrospective audit. The certification requirement ensures that the system being overseen has met the qualification threshold that the responsible authority has set for the relevant normative domain. The Excluded classification identifies the portions of the norm that the system does not process at all, informing overseers (and courts) that these elements require entirely human handling. The Contested classification signals to overseers that the system’s logic at a given point rests on an interpretive choice that is acknowledged as disputable, requiring heightened attention during oversight. Without such a structured account, Article 14’s requirement for effective human oversight risks remaining abstract: a legal obligation without a technical instrument to give it concrete, verifiable form. The Coverage Map is that instrument.
4. The Sub-normative Linkage and Anchor System
Every element of a Decision Tree is linked to the specific unit of legislative text from which it derives: the sentence, the clause, the sub-clause, the numbered item. This linkage is not a generic reference to a paragraph. It is a typed connection, carried by one of five anchor types (derived_from, constrained_by, delegated_by, exception_from, defined_in), that characterises the nature of the normative derivation.
The sub-normative linkage system is described in detail in Chapter 7. Its function is constant across all three models: it ensures that the normative basis of every element in the Decision Tree is traceable to the specific legislative text from which it derives. What changes is when and how the linkage is used.
In Model A, the linkage is verified at construction time and serves as the basis for judicial review of the tree itself.
In Model B, the linkage serves as the reference structure against which the Legal Agent’s reasoning is validated. When the agent applies a provision, the validation framework checks whether the agent’s interpretation is consistent with the tree’s anchor for that provision.
In Model C, the linkage is the retrospective documentation tool. The agent’s determination is mapped against the tree’s anchor structure to produce a record of which statutory provisions the agent relied upon and whether its application is consistent with the tree’s interpretation.
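The following sketch shows how a typed anchor might be represented. The five anchor type names are taken from this section; the surrounding structure, the identifier formats, and the one-line glosses in the comments are informal assumptions rather than the Chapter 7 definitions.

```python
# Illustrative sketch only: the five anchor type names come from this section;
# the structure, identifier formats, and glosses are informal assumptions.
from dataclasses import dataclass
from enum import Enum


class AnchorType(Enum):
    DERIVED_FROM = "derived_from"      # the tree element restates a rule stated in the text
    CONSTRAINED_BY = "constrained_by"  # the text limits how the element may be applied
    DELEGATED_BY = "delegated_by"      # the text delegates the detail that the element fills in
    EXCEPTION_FROM = "exception_from"  # the element carves out an exception stated in the text
    DEFINED_IN = "defined_in"          # the element uses a term the text defines


@dataclass(frozen=True)
class Anchor:
    tree_element_id: str   # the Decision Tree element being anchored
    text_unit_id: str      # Akoma Ntoso / FRBR-style identifier of the sentence, clause, or item
    anchor_type: AnchorType


# A hypothetical anchor: an income condition derived from a specific sentence of the statute.
example = Anchor("cond.income_threshold", "sec_12__para_1__sent_2", AnchorType.DERIVED_FROM)
```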
5. The DataPoint Schema
Every Decision Tree specifies the facts it requires through a formal DataPoint schema: which facts are needed, what types they must be, from what sources they should be obtained, what confidence thresholds apply, and what binding hints guide the fact-finding agent to the authoritative source for each value.
The DataPoint schema is, in practical terms, the contract between the normative layer and the fact-finding layer. Its function shifts across models in a way that reflects the broader shift in the Decision Tree’s role:
In Model A, the DataPoint schema defines the required inputs for deterministic evaluation. Facts that do not conform to the schema are rejected before they reach the evaluation engine.
In Model B, the DataPoint schema defines the validation criteria for the Legal Agent’s factual record. The agent assembles facts according to the schema’s requirements, and the validation framework checks compliance before the normative validation proceeds.
In Model C, the DataPoint schema defines the audit specification for the agent’s factual basis. The retrospective check verifies whether the agent’s determination was based on facts that would have met the schema’s requirements.
Across all models, the DataPoint schema plays a critical role in addressing the fact framing problem identified in Chapter 4. It requires the agent to document not only the value of each fact but also the interpretive choice that produced it, the source consulted, and the confidence level assigned. This documentation is the architectural safeguard against the risk that intelligent agents steer outcomes through fact selection rather than through normative evaluation.
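As an illustration of this contract, the sketch below pairs a hypothetical DataPoint specification with the record of how a fact was actually obtained. Every field name is an assumption introduced for exposition, not a normative schema.

```python
# Illustrative sketch only: hypothetical field names, not a normative schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataPointSpec:
    name: str              # the fact the Decision Tree requires, e.g. "household_income"
    value_type: str        # expected type, e.g. "decimal", "date", "boolean"
    source_hint: str       # binding hint pointing the fact-finding agent to the authoritative register
    min_confidence: float  # confidence threshold below which the fact is rejected or escalated


@dataclass(frozen=True)
class RecordedFact:
    spec: DataPointSpec
    value: object
    source_consulted: str     # where the value was actually obtained
    interpretive_choice: str  # how an ambiguous raw record was mapped onto the schema (fact framing)
    confidence: float

    def satisfies_spec(self) -> bool:
        """The check applied before evaluation (Model A), before validation (Model B),
        or retrospectively during audit (Model C)."""
        return self.confidence >= self.spec.min_confidence
```

Recording the interpretive choice and the source alongside the value is what makes fact framing visible to later validation and audit, rather than leaving it implicit in the agent's behaviour.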
6. The Registry
The Registry is the public, append-only, cryptographically secured repository of Decision Trees, Coverage Maps, DataPoint schemas, and (in Models B and C) agent certification records. It is the architectural equivalent of the Official Gazette for machine-applicable law: the authoritative record of which normative specifications are in force, in which version, for which period, and under whose signature.
The Registry’s function is constant across all three models. It provides:
Temporal addressability. Every version of every Decision Tree is permanently accessible, enabling the reconstruction of exactly which normative specification was in force at the moment of any past decision.
Cryptographic integrity. Every Registry entry is signed by the publishing authority and verifiable by any party with access to the public key.
Federation. Each jurisdiction maintains its own Registry instance under its own governance. Cross-jurisdictional queries are handled through standardised interfaces without requiring any jurisdiction to cede normative sovereignty.
Public accessibility. The Registry is open to inspection by courts, auditors, legislators, civil society, and citizens. The normative basis of automated governance is not proprietary. It is public infrastructure.
In Models B and C, the Registry’s scope expands to include agent certification records: the formal documentation of which Legal Agents have been certified for which legal domains, at what capability level, and under what conditions. This expansion reflects the shift in the locus of normative authority from the Decision Tree (Model A) toward the agent (Models B and C), and the corresponding need for public accountability of the agent’s qualifications.
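A minimal sketch of a Registry entry, together with the two operations this section emphasises (temporal addressability and signature verification), is given below. The structure and function names are hypothetical, and the signature check is deliberately abstracted rather than tied to a particular algorithm.

```python
# Illustrative sketch only: hypothetical structure; signature verification is abstracted
# behind a caller-supplied function rather than tied to a particular algorithm.
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional


@dataclass(frozen=True)
class RegistryEntry:
    tree_id: str
    version: str
    valid_from: date
    valid_to: Optional[date]   # None while the version remains in force
    authority: str             # publishing authority whose key signs the entry
    payload: bytes             # serialised Decision Tree, Coverage Map, or DataPoint schema
    signature: bytes


def version_in_force(entries: list[RegistryEntry], tree_id: str, at: date) -> Optional[RegistryEntry]:
    """Temporal addressability: return the version that governed a decision taken on `at`."""
    for entry in entries:
        if entry.tree_id != tree_id:
            continue
        if entry.valid_from <= at and (entry.valid_to is None or at <= entry.valid_to):
            return entry
    return None


def verify(entry: RegistryEntry, check_signature: Callable[[bytes, bytes], bool]) -> bool:
    """Cryptographic integrity: any party holding the authority's public key can verify the entry."""
    return check_signature(entry.payload, entry.signature)
```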
How Data Flows Through the System
An architecture is most clearly understood through the movement of information across it. The following describes a complete end-to-end flow under each model, using the example of a citizen benefit application.
Under Model A:
A citizen submits an application. An AI agent processes the submission: extracting relevant data points, querying applicable registers, resolving identity, and assembling a typed, validated factual record. This step is probabilistic. The agent queries the Registry to identify the applicable Decision Tree. The factual record is submitted to the evaluation service. The evaluation service applies the Decision Tree deterministically and produces a signed Decision Package. Where the tree identifies a Discretion Point, the evaluation pauses, the case is escalated to a human official, and the process resumes after the official’s determination is recorded. The Decision Package is delivered to the citizen with a plain-language explanation.
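The Model A flow just described can be summarised as a short orchestration sketch. The collaborator and method names below are hypothetical placeholders for the components introduced earlier in this chapter and are injected as parameters rather than implemented here.

```python
# Illustrative orchestration sketch of the Model A flow described above. All collaborator
# names are hypothetical placeholders, injected as parameters rather than implemented here.
def process_application_model_a(application, fact_finder, registry, evaluator, officials):
    # 1. Fact-finding (probabilistic): an AI agent assembles a typed, validated factual record.
    facts = fact_finder.assemble(application)

    # 2. Identify the Decision Tree in force for this norm at the time of the application.
    tree = registry.resolve(application.norm_ref, at=application.received_on)

    # 3. Deterministic evaluation: the same facts and the same tree always yield the same outcome.
    result = evaluator.evaluate(tree, facts)

    # 4. Discretion Point: pause, escalate to a human official, resume once the determination is recorded.
    if result.needs_discretion:
        determination = officials.escalate(result.discretion_point, facts)
        result = evaluator.resume(result, determination)

    # 5. Signed Decision Package, delivered to the citizen with a plain-language explanation.
    return result.to_signed_decision_package()
```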
Under Model B:
The fact-finding step is identical to Model A. But after assembling the factual record, the Legal Agent goes further: it reads the applicable statutory text, identifies the relevant provisions, subsumes the facts, and produces a determination. That determination is then submitted to the validation service, which checks it against the Decision Tree’s structure. If the validation identifies a deviation between the agent’s determination and the Decision Tree’s structure, the nature of the deviation is classified and documented. Not every deviation is an error. Some deviations reflect the agent’s ability to handle open texture, exceptions, or contextual nuances that the tree’s formalisation cannot capture, which is precisely the reason Model B exists. The validation framework distinguishes between outcome deviations (where the agent reaches a different result), reasoning path deviations (where the result is the same but the legal reasoning differs), fact classification deviations (where the agent interprets the factual basis differently from what the DataPoint schema expects), and scope deviations (where the agent considers norm elements that the tree does not cover). Each type is documented in the Decision Package. Outcome deviations and structural deviations that cannot be mapped against the tree are routed to a human official for review. Reasoning path deviations and fact classification deviations are logged and available for audit but do not automatically require human intervention, unless the deviation pattern across a population of cases suggests a systematic divergence that warrants review. At Discretion Points, the process follows the same escalation as Model A, with the agent providing the AI-Assistance-Package to support the official’s judgment.
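A compact sketch of this deviation handling follows. The enum values mirror the deviation types named above; the routing of scope deviations, which the text does not specify, is marked as an assumption.

```python
# Illustrative sketch of the deviation handling described above; names are hypothetical.
from enum import Enum


class Deviation(Enum):
    OUTCOME = "outcome"                # the agent reaches a different result than the tree
    REASONING_PATH = "reasoning_path"  # same result, different legal reasoning
    FACT_CLASSIFICATION = "fact"       # facts interpreted differently from what the schema expects
    SCOPE = "scope"                    # the agent considers norm elements the tree does not cover
    UNMAPPABLE = "unmappable"          # structural deviation that cannot be mapped against the tree


# Deviations routed to a human official before the decision is issued.
ESCALATE_IMMEDIATELY = {Deviation.OUTCOME, Deviation.UNMAPPABLE}


def route(deviation: Deviation, systematic_divergence: bool = False) -> str:
    """Escalate, or log for audit and population-level review.

    Treating scope deviations as log-only is an assumption; the chapter does not
    specify their routing.
    """
    if deviation in ESCALATE_IMMEDIATELY or systematic_divergence:
        return "escalate_to_official"
    return "log_for_audit"
```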
Under Model C:
The Legal Agent performs the full process: fact-finding, statutory interpretation, subsumption, and determination. No validation against the Decision Tree occurs during the process. After the determination, the agent’s reasoning and result are documented against the Decision Tree’s structure in a retrospective audit record. The audit record identifies where the agent’s application is consistent with the tree and where it deviates. The result is submitted for human approval before it becomes binding. At Discretion Points, the agent identifies the need for human judgment and escalates, as in Models A and B.
In all three models, the final output is a Decision Package: a structured, signed, cryptographically verified record of the complete decision pathway. In Model A, the Decision Package records a deterministic evaluation. In Model B, it records a validated agent determination. In Model C, it records an approved agent determination with a retrospective audit record. The format is consistent. The provenance differs. The constitutional commitment (a complete, verifiable, challengeable record of every automated determination) is constant.
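One way to picture this common format with varying provenance is the following sketch; the fields are hypothetical and stand in for the full Decision Package specification.

```python
# Illustrative sketch only: hypothetical fields standing in for the full Decision Package format.
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    MODEL_A = "deterministic evaluation"
    MODEL_B = "validated agent determination"
    MODEL_C = "approved agent determination with retrospective audit"


@dataclass(frozen=True)
class DecisionPackage:
    case_id: str
    tree_id: str
    tree_version: str               # ties the decision to the Registry version in force at the time
    provenance: Provenance          # which model produced the determination
    facts: tuple                    # the validated factual record
    reasoning_record: str           # evaluation path, validation record, or retrospective audit record
    discretion_records: tuple = ()  # human determinations recorded at any Discretion Point
    signature: bytes = b""          # signed by the issuing authority, verifiable against the Registry
```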
Use Case Taxonomy: Three Dimensions
The data flow described above illustrates one scenario among many. The OLRF architecture is designed to serve a broader space of use cases, which can be classified along three independent dimensions.
Dimension 1: Initiating Actor
Five types are distinguished. Citizen-initiated and enterprise-initiated workflows begin with a human submission. Authority-initiated workflows begin with a public body acting on its own motion. Event-initiated workflows are triggered by a registered life event or system state (a birth, a change of address, an exceeded threshold). Machine-initiated workflows occur when an autonomous system evaluates its own operations against applicable norms as part of its operational cycle, with no human actor present at the moment of initiation.
Dimension 2: Human Presence
This is the constitutionally decisive dimension. Four classes are defined.
Class H0 designates a fully automated workflow with no Discretion Point and no human actor present. Constitutionally permissible only for bound decisions whose outcome follows deterministically from statute. Available only in Model A.
Class H1 designates a workflow with at least one Discretion Point. A human official is present at a formally specified node. This is the standard class for the majority of complex administrative decisions. Available in all three models.
Class H2 designates a workflow with no Discretion Point in the individual case but systematic human oversight at the population level through Coverage Map analysis and audit detection. Available in Models A and B.
Class H3 designates a workflow in which human presence is confined entirely to the design and certification phase: the construction of the Decision Tree, the certification of the Legal Agent, the drafting of the Coverage Map. The process itself runs without human involvement. This is the class of machine-to-machine regulatory compliance workflows. Available in Models A and B for preventive and analytical outputs only.
The constitutional logic is visible in the relationship between Human Presence class and output type: the further human presence recedes from the individual process, the further the output recedes from direct legal effect on individuals. No Class H3 workflow produces a binding determination addressed to a citizen. This is not a coincidence. It is the architecture working as designed.
Dimension 3: Normative Function of the Output
Five output functions are distinguished: binding (an administrative act with direct legal effect), constitutive-internal (an intermediate determination within a larger proceeding), preventive (a conformity assessment produced before a planned action), reconstructive (a retrospective retrieval to support review of a past determination), and analytical (an evaluation against a draft norm or test dataset, with no legal effect intended).
The taxonomy is important because it reveals which constitutional arguments apply to which deployment scenarios and which governance mechanisms each class of workflow requires. A binding output addressed to a citizen under Model B requires agent certification, normative validation, and Discretion Point enforcement. An analytical output under Model C used for legislative impact assessment requires none of these, because it produces no legal effect. The architecture adapts its requirements to the constitutional weight of the output.
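The constraints stated in this taxonomy can be expressed as a simple consistency check. The sketch below encodes only the availability rules named in this section, using hypothetical type names.

```python
# Illustrative sketch only: hypothetical type names encoding the availability rules of this section.
from enum import Enum


class Model(Enum):
    A = "deterministic evaluation"
    B = "guided evaluation"
    C = "autonomous legal reasoning"


class HumanPresence(Enum):
    H0 = "fully automated"
    H1 = "Discretion Point with a human official"
    H2 = "population-level oversight"
    H3 = "design and certification phase only"


class OutputFunction(Enum):
    BINDING = "binding"
    CONSTITUTIVE_INTERNAL = "constitutive-internal"
    PREVENTIVE = "preventive"
    RECONSTRUCTIVE = "reconstructive"
    ANALYTICAL = "analytical"


def permissible(model: Model, presence: HumanPresence, output: OutputFunction) -> bool:
    """Check a workflow classification against the constraints stated in this chapter."""
    if presence is HumanPresence.H0 and model is not Model.A:
        return False  # H0 is available only in Model A
    if presence is HumanPresence.H2 and model is Model.C:
        return False  # H2 is available only in Models A and B
    if presence is HumanPresence.H3:
        if model is Model.C:
            return False  # H3 is available only in Models A and B
        if output not in (OutputFunction.PREVENTIVE, OutputFunction.ANALYTICAL):
            return False  # H3 workflows produce preventive and analytical outputs only
    return True
```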
Footnotes
1. OASIS, LegalDocML Technical Committee Charter, 2012. URL: https://www.oasis-open.org/committees/legaldocml/. LegalDocML is the OASIS Technical Committee responsible for the standardisation of Akoma Ntoso. In practice, the terms LegalDocML and Akoma Ntoso are often used interchangeably.
2. OASIS, Akoma Ntoso Version 1.0: Part 1, Vocabulary, OASIS Standard, August 2018. URL: http://docs.oasis-open.org/legaldocml/akn-core/v1.0/akn-core-v1.0-part1-vocabulary.html. Akoma Ntoso (from the Akan phrase for “linked hearts”) is an XML vocabulary for parliamentary, legislative, and judiciary documents, originally developed under the auspices of the United Nations Department of Economic and Social Affairs (UN/DESA) and subsequently standardised through OASIS.
3. IFLA Study Group on the Functional Requirements for Bibliographic Records, Functional Requirements for Bibliographic Records: Final Report, IFLA, 1998 (amended 2009). URL: https://www.ifla.org/publications/functional-requirements-for-bibliographic-records.
4. The proposition that the model assignment is a Wesentlichkeits-relevant decision is developed in Chapter 12, fn. 51. The Coverage Map’s model assignment field is the documentary form through which this constitutional requirement is operationalised. The assignment must be traceable to a legislative basis: either the legislature itself specifies the model in the enabling statute, or it delegates the assignment to the responsible authority under sufficiently specific criteria. A model assignment that lacks a legislative basis is constitutionally deficient for the same reasons that a Decision Tree without sub-normative linkage is legally deficient: the exercise of public power cannot be traced to its democratic source.
5. The agent certification requirement in the Coverage Map connects two architectural elements that would otherwise remain disconnected: the model assignment (which determines the degree of AI autonomy) and the certification system (which determines the qualification of the agent operating at that degree of autonomy). Without this connection, a responsible authority could assign a norm element to Model C (maximum autonomy) while specifying only a minimal certification requirement, effectively permitting an unqualified agent to exercise autonomous normative reasoning. The certification requirement in the Coverage Map prevents this: it forces the authority to make the qualification decision visible alongside the autonomy decision, and it subjects both decisions to the same democratic scrutiny (legislative review, judicial review, civil society contestation) that applies to the Coverage Map as a whole.