OLRF

The Rules That Run the World

Glossary

A2A Protocol (Agent2Agent Protocol)

An open peer-to-peer communication standard, published by Google in 2025, that governs how AI agents coordinate with one another in multi-agent workflows. In the OLRF architecture, A2A ensures that every task in a multi-agent legal process is formally attributed to the agent that performed it, that results are immutable once transmitted, and that escalation to human judgment is handled through a formally specified mechanism. A2A is the enforcement instrument for the constitutional separation principle on the agent coordination side, complementing MCP on the AI access side.

Agent Certification

The formal, structured, publicly documented process through which the normative competence of an AI agent is assessed before it is permitted to exercise normative functions under Models B or C. Certification is domain-specific (issued per normative domain), model-specific (Model B and Model C require different certification classes), version-bound (each certified version of the agent is identified; material changes require recertification), test-based (assessed through structured tests, not documentation), and published in the Registry. The certification system is the constitutional mechanism by which the state ensures that machine actors exercising normative functions meet qualification standards analogous to those required of human officials. See Chapter 10.

Agent Certification Credential

A machine-readable, cryptographically verifiable, Registry-published credential that an AI agent presents to the interface server at Control 5. The credential specifies the normative domain, the model (B or C), the agent version, the date of certification, and the validity period. It is issued by the certifying body, verified at runtime by the evaluation engine, and included in the signed Decision Package under Models B and C.
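
As an illustrative sketch only (the field names and validity check below are assumptions, not the normative schema defined by the certifying body), a credential of this kind might be modelled and checked at runtime as follows:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgentCertificationCredential:
    # Illustrative fields; the normative schema is set by the certifying body.
    agent_id: str
    agent_version: str
    normative_domain: str
    model: str            # "B" or "C"
    certified_on: date
    valid_until: date
    signature: bytes      # cryptographic signature of the certifying body

def is_temporally_valid(cred: AgentCertificationCredential, today: date) -> bool:
    # Runtime check at Control 5: the credential must lie inside its validity
    # period. (Signature verification against the Registry key is omitted.)
    return cred.certified_on <= today <= cred.valid_until

cred = AgentCertificationCredential(
    agent_id="agent-001", agent_version="2.3.0",
    normative_domain="housing-benefit", model="B",
    certified_on=date(2025, 1, 15), valid_until=date(2026, 1, 15),
    signature=b"...",
)
assert is_temporally_valid(cred, date(2025, 6, 1))
assert not is_temporally_valid(cred, date(2026, 2, 1))
```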

Agent Certification Requirement

A classification in the Coverage Map that specifies, for every element assigned to Model B or C, the certification class that a Legal Agent must hold in order to operate on that element. The certification class is determined by the model and the constitutional weight of the norm element. It is subject to democratic scrutiny as part of the pre-publication review of the Coverage Map.

AI Assistance Package

The structured output generated by four specialised MCP tools when the OLRF evaluation engine reaches a Type B Discretion Point. The package contains case law retrieved from the relevant administrative court database, precedent analysis drawn from the authority’s own anonymised decision record, a proportionality corridor map showing the range of outcomes courts have recognised as lawful, and a consistency check against prior Discretion Point decisions. The package is subject to five binding constraints: no recommendation or weighting of outcomes, source attribution for every legal claim, clear separation of legal framework from individual facts, explicit disclosure of the AI system’s limitations, and documented temporal currency of the sources used.

Anchor Type

A formal classification of the relationship between a Decision Tree element and the legislative text passage to which it is linked. Five anchor types are defined. “Derived from” expresses direct normative derivation. “Constrained by” expresses a limiting relationship. “Delegated by” expresses the relationship between a Parameter and the provision that authorises its value to be set by subordinate instrument. “Exception from” expresses the relationship between an Exception element and the general rule it departs from. “Defined in” expresses the relationship between a Decision Tree element and a definitional provision elsewhere in the legal order. Together the five types constitute a formal grammar of normative derivation.
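
The five anchor types can be sketched as an enumeration; the identifiers and the example anchor below are illustrative, not part of the OLRF specification:

```python
from enum import Enum

class AnchorType(Enum):
    # The five anchor types named above; identifiers are illustrative.
    DERIVED_FROM = "derived_from"      # direct normative derivation
    CONSTRAINED_BY = "constrained_by"  # limiting relationship
    DELEGATED_BY = "delegated_by"      # Parameter -> authorising provision
    EXCEPTION_FROM = "exception_from"  # Exception -> general rule departed from
    DEFINED_IN = "defined_in"          # element -> definitional provision

# An anchor links one Decision Tree element to one unit of legislative text.
anchor = {
    "element_id": "income_threshold",    # hypothetical Parameter
    "anchor_type": AnchorType.DELEGATED_BY,
    "text_unit": "section 12(2) sentence 1",  # hypothetical citation
}
assert anchor["anchor_type"] is AnchorType.DELEGATED_BY
assert len(list(AnchorType)) == 5
```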

Append-Only

A technical property of the OLRF Registry requiring that no entry, once published, is ever deleted, overwritten, or modified. Every superseded version of a Decision Tree remains permanently accessible in the Registry alongside its successors. This property is the technical implementation of the constitutional principle that the normative basis of a decision applied to a citizen cannot be retroactively erased.
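
A minimal sketch of the property (the class and method names are illustrative): publication only ever appends, and every superseded version remains retrievable:

```python
class AppendOnlyRegistry:
    """Sketch: entries can be added and read, never changed or removed."""
    def __init__(self):
        self._entries = []

    def publish(self, norm_id: str, version: int, tree: dict) -> None:
        # No update or delete operation exists anywhere on this class.
        self._entries.append({"norm_id": norm_id, "version": version, "tree": tree})

    def versions(self, norm_id: str) -> list:
        # Every superseded version remains accessible alongside its successors.
        return [e for e in self._entries if e["norm_id"] == norm_id]

reg = AppendOnlyRegistry()
reg.publish("housing-benefit", 1, {"root": "..."})
reg.publish("housing-benefit", 2, {"root": "..."})   # supersedes, never overwrites
assert [e["version"] for e in reg.versions("housing-benefit")] == [1, 2]
```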

Capability Boundary

One of two boundaries enforced by the interface layer’s fifteen controls. The capability boundary (Controls 1 through 4 and 6 through 15) ensures that every interaction is authenticated, authorised, schema-conformant, integrity-protected, logged, and cryptographically bound. It prevents technical overreach. See also: Qualification Boundary.

Certifier Pattern

The deepest of the three docking patterns through which existing AI agents connect to the OLRF infrastructure. Under the Certifier Pattern, an agent submits to formal certification (Chapter 10), receives a Registry-published credential, and gains the ability to operate in any jurisdiction that publishes OLRF-conformant Decision Trees, subject to the Coverage Map’s certification requirements. The Certifier Pattern provides market access for the agent and quality assurance at scale for the ecosystem. See Chapter 17.

Class A Conformance (Normative Transparency)

The entry level of OLRF conformance. A Class A implementation requires the publication of Decision Trees in the Registry with valid sub-normative linkage to the statutory text, a Coverage Map documenting the scope of automation (including model assignments and agent certification requirements), a valid cryptographic signature from the responsible authority, and a test suite demonstrating normative behaviour across representative scenarios. Class A does not require operational integration with a live evaluation engine. Its primary value is transparency: it makes the responsible authority’s interpretation of the law publicly verifiable for the first time. Class A is also the entry point for the Connector Pattern (Chapter 17), through which existing systems can connect to the OLRF without replacing their internal infrastructure.

Class B Conformance (Operational Integration)

The intermediate level of OLRF conformance. A Class B implementation adds operational integration of a Class A Decision Tree with a live evaluation engine that processes actual citizen cases through the MCP interface and produces signed Decision Packages. Class B encompasses both Model A (deterministic evaluation) and the beginning of Model B (guided evaluation by certified Legal Agents, validated against the Decision Tree). Every determination made by a Class B system is permanently associated with the specific Registry version of the Decision Tree against which it was evaluated and, under Model B, with the certification credential of the agent that performed the subsumption.

Class C Conformance (Full Three-Model Integration)

The complete level of OLRF conformance. A Class C implementation operates the full three-model architecture: Model A (deterministic evaluation), Model B (guided evaluation with validated agent subsumption), and Model C (autonomous legal reasoning with retrospective audit). Class C additionally integrates multi-agent coordination, cross-jurisdictional norm evaluation through the federated Registry network, the complete agent certification lifecycle including monitoring, recertification, and revocation, and the composite audit trail.

Composite Audit Trail

A structured assembly of individual audit records from a multi-agent A2A workflow, cryptographically linked to form a complete and verifiable account of the entire legal process. The composite audit trail records not only which agent performed which step but also which certification each agent held at the moment of performance. It serves for multi-agent processes the same accountability function that the Decision Package serves for single-norm evaluations.

Conformance Class

One of three levels of OLRF implementation completeness (Class A, Class B, Class C), each constituting a genuine constitutional improvement on the pre-OLRF baseline. The conformance class model is the mechanism by which adoption becomes institutionally tractable: authorities begin at Class A and deepen as institutional capacity develops.

Connector Pattern

The lightest of the three docking patterns through which existing Law-as-Code systems connect to the OLRF infrastructure. Under the Connector Pattern, the existing system continues to operate as before. A translation layer produces a conformant OLRF Decision Tree from the existing system’s normative specification. The tree is published in the Registry with sub-normative linkage, a Coverage Map, and a cryptographic signature. The existing system gains public promulgation, sub-normative anchoring, and democratic oversight without changing its internal format, evaluation engine, or operational workflows. See Chapter 17.

Coverage Classification

One of six formal categories used in the Coverage Map. “Implemented” marks provisions fully operationalised in the tree. “Discretionary” marks provisions that involve one or more Discretion Points. “Excluded” marks provisions deliberately not operationalised, with documented justification. “Contested” marks provisions where the responsible authority acknowledges genuine interpretive dispute. “Model Assignment” specifies whether an implemented element is processed under Model A, B, or C. “Agent Certification Requirement” specifies the certification class required for agents operating on elements assigned to Model B or C.
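
The interplay of the six classifications can be sketched as consistency rules over a Coverage Map entry; the field names and the certification class label below are assumptions for illustration only:

```python
COVERAGE_STATUSES = {"implemented", "discretionary", "excluded", "contested"}

def check_entry(entry: dict) -> None:
    """Illustrative consistency rules; field names are not the normative schema."""
    assert entry["status"] in COVERAGE_STATUSES
    if entry["status"] == "implemented":
        assert entry["model"] in {"A", "B", "C"}      # Model Assignment
        if entry["model"] in {"B", "C"}:
            assert entry["certification_class"]       # Agent Certification Requirement
    if entry["status"] == "excluded":
        assert entry["justification"]                 # documented justification

check_entry({"status": "implemented", "model": "B",
             "certification_class": "B-2"})           # hypothetical class label
check_entry({"status": "excluded",
             "justification": "requires in-person assessment"})
```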

Coverage Map

A structured document published alongside every Decision Tree in the Registry, mapping the entire normative content of the implemented provision against the tree’s elements using six coverage classifications. The Coverage Map documents not only what is automated and what is not, but also how it is automated (under which model) and who is qualified to automate it (at which certification level). It is the primary instrument of democratic control over the scope, mode, and quality of automation.

Decision Package

The complete, structured, cryptographically signed record produced by the OLRF evaluation engine every time a Decision Tree is evaluated against a set of facts. Its format is constant across the three models, but its content varies. Under Model A, it records the evaluation path, the subsumption graph, the legal anchors, the discretion record where applicable, and the final legal effects. Under Model B, it additionally records the agent’s subsumption, the validation framework’s assessment, and any deviation classification. Under Model C, it additionally records the agent’s full reasoning chain, the retrospective audit’s assessment, and the agent’s certification credential. The Decision Package is the OLRF’s structural implementation of the constitutional duty to give reasons.
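
The constant-format, variable-content principle can be sketched as follows; all field names are illustrative assumptions, and the engine's cryptographic signing step is omitted:

```python
def decision_package(model: str, common: dict, extras: dict) -> dict:
    """Sketch: one package format, model-dependent content."""
    pkg = {
        "model": model,
        "evaluation_path": common["evaluation_path"],
        "subsumption_graph": common["subsumption_graph"],
        "legal_anchors": common["legal_anchors"],
        "effects": common["effects"],
    }
    if model == "B":
        pkg.update(agent_subsumption=extras["agent_subsumption"],
                   validation_assessment=extras["validation_assessment"],
                   deviation=extras.get("deviation"))
    elif model == "C":
        pkg.update(reasoning_chain=extras["reasoning_chain"],
                   audit_assessment=extras["audit_assessment"],
                   credential=extras["credential"])
    return pkg  # signed by the engine before release (signing omitted here)

pkg = decision_package(
    "B",
    common={"evaluation_path": ["n1", "n2"], "subsumption_graph": {},
            "legal_anchors": [], "effects": {"entitled": True}},
    extras={"agent_subsumption": {}, "validation_assessment": "confirmed"},
)
assert pkg["model"] == "B" and "reasoning_chain" not in pkg
```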

Decision Tree

The central normative artefact of the OLRF. A formally specified, executable representation of the logic of a legal norm, constructed by the responsible authority, published in the Registry, and linked sub-normatively to the statutory text from which every element derives. The Decision Tree is not a mere data structure or flowchart. It is executable code: given a set of typed, validated input facts, it traverses its branches, evaluates its conditions, invokes its parameters, encounters its Discretion Points, produces its legal effects, and terminates in a determination.

Deterministic Execution

The property of an OLRF evaluation engine by which the same set of typed, validated input facts always produces the same output effects, on every occasion, without exception, regardless of which conformant engine performs the evaluation. Deterministic execution is the technical instrument through which the constitutional requirement of equal treatment is extended into automated governance. Under Model A, the entire evaluation is deterministic. Under Models B and C, the Decision Tree provides the deterministic validation baseline against which agent reasoning is checked.
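
The property can be illustrated with a toy engine and a hypothetical two-branch norm (the threshold and amounts below are invented for the example): the same validated facts always yield the same effects.

```python
def evaluate(tree: dict, facts: dict) -> dict:
    """Toy deterministic engine: walks condition nodes to a terminal effect."""
    node = tree
    while "condition" in node:
        branch = "then" if node["condition"](facts) else "else"
        node = node[branch]
    return node["effect"]

# Hypothetical norm: entitlement if income is at or below a threshold.
tree = {
    "condition": lambda f: f["income"] <= 1200,
    "then": {"effect": {"entitled": True, "amount": 300}},
    "else": {"effect": {"entitled": False, "amount": 0}},
}
facts = {"income": 1000}
# Repeated evaluation of the same facts produces identical effects.
assert evaluate(tree, facts) == evaluate(tree, facts) == {"entitled": True, "amount": 300}
```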

Deviation Record

A structured record produced under Model B when the Legal Agent’s subsumption differs from the Decision Tree’s deterministic evaluation. The deviation record classifies the deviation (within the validation corridor, outside the corridor but below the escalation threshold, or above the escalation threshold), documents the agent’s reasoning, and preserves the information necessary for judicial review of whether the deviation was legally permissible. Population-level analysis of deviation records reveals systematic patterns that individual case review would not detect.
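
The three-way classification can be sketched as a simple threshold function; the numeric divergence measure and threshold values are illustrative assumptions, since the validation framework defines the actual corridor:

```python
def classify_deviation(divergence: float, corridor: float, escalation: float) -> str:
    """Three classes from the definition above; numbers are illustrative."""
    if divergence <= corridor:
        return "within_corridor"          # permissible divergence
    if divergence <= escalation:
        return "outside_corridor"         # documented, below escalation threshold
    return "escalate"                     # above the escalation threshold

assert classify_deviation(0.05, corridor=0.10, escalation=0.30) == "within_corridor"
assert classify_deviation(0.20, corridor=0.10, escalation=0.30) == "outside_corridor"
assert classify_deviation(0.40, corridor=0.10, escalation=0.30) == "escalate"
```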

Discretion Point

A formally specified node in a Decision Tree at which the applicable statutory provision confers judgment on a human official rather than providing a deterministic rule. At a Discretion Point, evaluation is suspended, the established facts and, where applicable, an AI Assistance Package are presented to the responsible official, and evaluation resumes only after the official has completed a structured reasoning record. Three types are defined. Type A (Unassisted) involves no AI-generated assistance. Type B (AI-Assisted) involves the full AI Assistance Package. Type C (Escalated) routes the decision beyond the processing official to a supervising authority, collegial body, or court.
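
The three types can be sketched as a routing function; the return fields are an assumption about how an engine might represent the suspension, not part of the specification:

```python
def route_discretion_point(dp_type: str) -> dict:
    """Sketch of where evaluation routes when suspended at a Discretion Point."""
    if dp_type == "A":      # Unassisted: official decides without AI assistance
        return {"to": "processing_official", "assistance_package": False}
    if dp_type == "B":      # AI-Assisted: official receives the full package
        return {"to": "processing_official", "assistance_package": True}
    if dp_type == "C":      # Escalated: routed beyond the processing official
        return {"to": "supervising_authority", "assistance_package": False}
    raise ValueError("unknown Discretion Point type")

assert route_discretion_point("B")["assistance_package"] is True
assert route_discretion_point("C")["to"] == "supervising_authority"
```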

Docking Pattern

One of three structured mechanisms through which existing Law-as-Code systems and AI agents connect to the OLRF infrastructure: the Connector Pattern (publish without replacing), the Validator Pattern (validate without constraining), and the Certifier Pattern (qualify for cross-system operation). The three patterns represent increasing depth of integration with correspondingly increasing institutional commitment. See Chapter 17.

Dual Publication Model

The institutional arrangement by which every piece of legislation applied by automated systems is published in two complementary forms: the traditional human-readable statutory text, and a machine-executable Decision Tree representing the legal logic that text embodies. Both forms carry the authority of the responsible publication body.

Ermessen

The German administrative law doctrine of discretion: the authority conferred on a public official by a statutory provision to choose between lawful alternatives in the individual case. Ermessen is exercised pflichtgemäß (in accordance with the purpose and limits of the conferring provision) and is subject to review for four categories of error: Ermessensausfall (failure to exercise discretion at all), Ermessensüberschreitung (exercise beyond the legal limits), Ermessensunterschreitung (failure to exercise the full available range), and Ermessensfehlgebrauch (exercise for impermissible purposes). The OLRF’s Discretion Point architecture is designed to prevent all four categories structurally.

Evaluation Engine

A conformant software implementation that traverses a Decision Tree in accordance with the OLRF evaluation semantics, processes a submitted set of typed, validated facts, verifies the agent’s certification at Control 5, produces a signed Decision Package, and communicates with AI clients through the MCP interface. Under Model A, the engine performs the complete normative evaluation. Under Model B, the engine validates the Legal Agent’s subsumption against the Decision Tree. Under Model C, the engine provides the retrospective audit baseline.

Federated Registry

An architecture in which each jurisdiction operates its own sovereign Registry instance, under its own law, with control over its own norms, signing keys, access policies, and governance procedures. Cross-border interaction is achieved through standardised interfaces that allow one jurisdiction’s systems to query another’s Registry under defined conditions, without administrative access to the other jurisdiction’s internal infrastructure. Federation preserves national control while creating the conditions for continental interoperability.

Law as Code

The structured, authoritative, machine-executable provision of law. A specification of how normative content (including conditions, legal consequences, exceptions, parameters, and defined discretion points) is represented in a formally defined structure that digital systems can execute. The definition has four elements: structured (formally defined format with explicit semantics), authoritative (published by or under the authority of the responsible institution), machine-executable (evaluable against facts to produce legally consequential determinations), and provision of law (concerning enacted law, not policy guidance or contractual terms). Law as Code is distinguished from the broader category of Rules as Code, which encompasses any structured representation of rules for machine processing. See Chapter 18.

Legal Agent

An AI system that performs normative functions under Models B or C. Under Model B, the Legal Agent subsumes facts under legal concepts, with its output validated against the published Decision Tree. Under Model C, the Legal Agent reasons autonomously from statutory text, with its output assessed through retrospective audit. Legal Agents must hold domain-specific, model-specific certification (Chapter 10) and present their certification credential at Control 5 of the interface layer before they are permitted to operate.

MCP (Model Context Protocol)

An open standard, published by Anthropic in 2024, that defines a structured interface between AI systems and external service providers. In the OLRF architecture, MCP is the architectural boundary through which AI systems access the normative layer. Five tools are defined: evaluate_norm, query_norm, list_norms, validate_facts, and explain_decision. The function of evaluate_norm varies across the three models: under Model A, it performs deterministic evaluation; under Model B, it validates agent subsumption; under Model C, it provides retrospective audit assessment.
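
MCP carries tool invocations as JSON-RPC requests. As a hedged sketch, a client's call to evaluate_norm might look like the following; the argument names (norm_id, registry_version, facts) are assumptions for illustration, since the OLRF tool schemas are defined by the specification itself:

```python
import json

# Hypothetical evaluate_norm invocation, framed as an MCP "tools/call" request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "evaluate_norm",
        "arguments": {                      # argument names are illustrative
            "norm_id": "housing-benefit",
            "registry_version": 3,
            "facts": {"income": 1000, "household_size": 2},
        },
    },
}

# The five OLRF tools exposed over the MCP boundary.
OLRF_TOOLS = {"evaluate_norm", "query_norm", "list_norms",
              "validate_facts", "explain_decision"}
assert request["params"]["name"] in OLRF_TOOLS
wire = json.dumps(request)                  # serialised form sent to the server
```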

Model A (Deterministic Evaluation)

The first of three models describing the Decision Tree’s variable function. Under Model A, the Decision Tree performs the complete normative evaluation: it receives facts, evaluates conditions, invokes parameters, produces effects. The AI agent’s role is limited to fact-finding. Model A is the most constrained form of automation and the one with the lowest constitutional risk. It corresponds to the scope of §35a VwVfG (fully automated administrative acts for bound decisions).

Model B (Guided Evaluation)

The second of three models. Under Model B, a certified Legal Agent performs subsumption (classifying facts under legal concepts), and the Decision Tree validates the agent’s output. Deviations between the agent’s subsumption and the tree’s deterministic evaluation are classified and documented. Model B preserves the Decision Tree as the normative guardrail while permitting AI-assisted normative reasoning for norm elements that require contextual evaluation beyond what deterministic logic can provide.

Model C (Autonomous Reasoning)

The third and most advanced model. Under Model C, a certified Legal Agent reasons autonomously from statutory text, producing a determination that is assessed retrospectively against the Decision Tree and an audit protocol. The agent must hold the highest certification class (Chapter 10). Model C is the functional equivalent of judicial reasoning applied by a machine actor. It requires either human approval before the determination becomes legally effective or a separate legislative authorisation under Article 22(2) GDPR.

Model Assignment

A classification in the Coverage Map that specifies, for each element marked as “Implemented,” whether the element is processed under Model A, B, or C. The model assignment is a Wesentlichkeits-relevant decision: for norm elements affecting fundamental rights, it must be made by the legislature or under sufficiently specific legislative delegation. See: Wesentlichkeitslehre.

Parameter

A variable element within a Decision Tree whose value is set by subordinate instrument rather than by the primary statute. Examples include income thresholds, qualifying periods, benefit rates, and penalty amounts. Every Parameter carries a “delegated by” anchor to the provision that authorises its value to be set by subordinate instrument.

Probabilistic System

A computational system that produces outputs that are statistically likely to be correct for a given input, based on patterns learned from training data. Probabilistic systems are appropriate for fact-finding tasks in governance contexts. Under the three-model framework, the role of probabilistic systems expands from fact-finding only (Model A) to guided subsumption (Model B) to autonomous reasoning (Model C), but the constitutional safeguards (certification, validation, audit) escalate in proportion to the degree of probabilistic involvement in normative functions.

Qualification Boundary

One of two boundaries enforced by the interface layer’s fifteen controls. The qualification boundary (Control 5) ensures that every agent performing normative functions holds a valid, domain-specific, model-specific, version-bound certification. It prevents normative overreach. See also: Capability Boundary.

Responsible Authority

The public body that constructs and publishes a Decision Tree for a statutory provision within its administrative competence. The responsible authority signs the Decision Tree with a cryptographic signature, publishes it in the Registry, determines the model assignment and agent certification requirements in the Coverage Map, and bears constitutional accountability for the interpretive choices embedded in the tree’s construction.

Retrospective Audit

The assessment process through which a Model C determination is evaluated after the autonomous agent has produced it. The audit protocol assesses the agent’s reasoning chain against the Decision Tree, the statutory text, and defined quality criteria (legal correctness, consistency, non-discrimination, proportionality). The audit does not immunise the determination from judicial challenge: a court retains the authority to disagree with the audit’s conclusion on independent grounds.

RUML (Rulemapping Logic Format)

A JSON-based file format developed by the Rulemapping Group that represents the complete subsumption structure of a legal norm: conditions, legal consequences, exceptions, sub-exceptions, and cross-references, organised in the sequence of legal application. RUML’s structural correspondence to the OLRF Decision Tree is closer than that of any other format in the current landscape. The translation from RUML to OLRF Decision Tree is the reference translation for the Connector Pattern. The Rulemap Builder (the visual no-code tool for constructing RUML specifications) serves as the analytical front-end for OLRF Decision Tree authoring. See Chapter 17.

Separation Principle

The foundational constitutional principle of the OLRF architecture. Under Model A, the separation is absolute: probabilistic AI systems perform fact-finding, and the deterministic engine performs normative evaluation. Under Models B and C, the separation is graduated: the Legal Agent participates in normative reasoning, but the Decision Tree provides the validation baseline (Model B) or the retrospective audit baseline (Model C) against which the agent’s reasoning is assessed. The separation is enforced architecturally through the MCP and A2A interface boundaries, through the certification verification at Control 5, and through the cryptographic signing of Decision Packages.

Sub-normative Linkage

The architectural requirement that every element of a Decision Tree carry a typed, verifiable link to the specific unit of legislative text from which it derives, at the level of sentence, clause, or sub-clause rather than merely at the level of article or paragraph number. Sub-normative linkage is what distinguishes the Decision Tree from every preceding form of machine-executable legal specification.
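
The required granularity can be sketched with a simple predicate; the key names and the citation structure below are illustrative assumptions, not the OLRF link schema:

```python
# A sub-normative anchor must resolve below article/paragraph level,
# to a sentence, clause, or sub-clause of the legislative text.
coarse = {"article": "12"}                                   # article-level: insufficient
fine = {"article": "12", "paragraph": "2", "sentence": "1"}  # sentence-level: conformant

def is_sub_normative(link: dict) -> bool:
    # Illustrative test: the link must name a unit below paragraph granularity.
    return any(k in link for k in ("sentence", "clause", "subclause"))

assert not is_sub_normative(coarse)
assert is_sub_normative(fine)
```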

Text Layer

The foundational layer of the OLRF architecture, containing authentic legislative text structured in accordance with LegalDocML and Akoma Ntoso. The Text Layer is the point of contact between the OLRF architecture and the existing legal order.

Three-Model Framework

The analytical structure that describes the Decision Tree’s variable function across three levels of AI participation in normative reasoning. Model A: the tree decides (deterministic evaluation). Model B: the tree validates (guided evaluation by a certified Legal Agent, checked against the tree). Model C: the tree audits (autonomous reasoning by a certified Legal Agent, assessed retrospectively against the tree). The constitutional requirements (equal treatment, reason-giving, promulgation, proportionality, judicial reviewability) are constant across all three models. The mechanisms through which they are satisfied differ.

Validation Framework

The published specification, under Model B, of the criteria by which the evaluation engine checks the Legal Agent’s subsumption against the Decision Tree. The validation framework defines the deviation corridor (the range of permissible divergence between the agent’s reasoning and the tree’s deterministic evaluation), the deviation classification system, and the escalation thresholds. It is published in the Registry and is judicially reviewable as a normative artefact in its own right.

Validator Pattern

The intermediate docking pattern through which existing AI agents connect to the OLRF infrastructure. Under the Validator Pattern, the agent performs its normative function using its existing reasoning process, and the output is validated against the published Decision Tree. The validation produces confirmation, classified deviation, or escalation. The Validator Pattern provides existing agents with a constitutional legitimacy framework and serves as the operational precursor to formal certification under the Certifier Pattern. See Chapter 17.

Wesentlichkeitslehre

The German constitutional doctrine, developed by the Federal Constitutional Court (BVerfGE 49, 89, Kalkar I), holding that all decisions essential to the realisation of fundamental rights must be made by the legislature itself and may not be delegated to the executive or to automated systems without sufficiently specific statutory guidance. In the OLRF, the Wesentlichkeitslehre applies to three architectural decisions: the dual publication model (the decision to publish enacted law in machine-executable form), the model assignment (the decision whether a norm element is processed under Model A, B, or C), and the agent certification requirement (the decision what qualification threshold applies to the agents that operate under each model). All three decisions must be traceable to a legislative basis for norm elements affecting fundamental rights.

§35a VwVfG

The provision of the German Administrative Procedure Act authorising fully automated administrative acts for gebundene Entscheidungen (bound decisions): decisions in which the applicable conditions and consequences are specified by statute with sufficient precision that no official discretion is required. The provision reflects a deliberate legislative choice that automation is constitutionally permissible for the deterministic application of clear rules and impermissible for decisions requiring the exercise of judgment. Models B and C operate beyond the §35a boundary: they involve normative reasoning that exceeds deterministic evaluation and therefore require a separate legislative basis.

§39 VwVfG

The provision of the German Administrative Procedure Act establishing the Begründungspflicht: the duty of a public authority to accompany discretionary administrative acts with a written statement of reasons. The OLRF’s Decision Package implements this duty as a structural requirement: every evaluation automatically produces a complete, signed reasoning record from which the required statement of reasons can be generated. The content of the reasoning record varies across the three models: evaluation path (A), deviation record (B), full reasoning chain (C).

§40 VwVfG

The provision of the German Administrative Procedure Act requiring that discretionary authority be exercised pflichtgemäß: in accordance with the purpose of the conferring provision and within its legal limits. The four categories of Ermessensfehler cognisable under §40 VwVfG provide the legal framework within which the OLRF’s AI Assistance Package is designed and the Discretion Point reasoning record is structured.