OLRF
Part IV: From Vision to Practice

Chapter 17

Connecting the Ecosystem — From Isolated Projects to Shared Infrastructure

Last updated: 2026-04-10. Open for review.

“Standards are the vocabulary of trust between strangers. Without them, every transaction begins with negotiation; with them, it begins with confidence.” --- Vint Cerf, Testimony before the United States House of Representatives Committee on Science, Space and Technology, 2012

The Landscape Problem

The OLRF does not enter an empty field. Across Europe, North America, Australasia, and parts of Asia, a growing number of projects have been working on aspects of the same problem: the translation of law into machine-processable form, the automation of public administration through rule engines and AI agents, and the governance of algorithmic decision-making in the public sector. These projects vary enormously in scope, maturity, institutional anchoring, and technical approach. Some are production systems that process millions of determinations per year. Others are research prototypes. Some focus on a single legal domain. Others aspire to general-purpose frameworks. Some are state-funded. Others are open-source community projects. Some use deterministic rule engines. Others use language models. The diversity is real and valuable. It reflects the fact that different communities, facing different institutional constraints and different legal traditions, have arrived at different solutions to aspects of the same underlying problem1.

But diversity without interoperability produces fragmentation. The current landscape of Law-as-Code and automated administration projects is characterised by a pattern that is familiar from the early history of digital infrastructure: many independent systems, each internally coherent, each solving a real problem, and none able to communicate with the others in a structured way. An OpenFisca installation in France cannot exchange normative specifications with a RegelSpraak installation in the Netherlands. A Catala specification of French social welfare law cannot be validated against a Blawx specification of Canadian administrative procedure. An AI agent developed for benefit processing in one jurisdiction cannot be assessed against the normative standards of another. Each project operates within its own format, its own vocabulary, its own governance model, and its own accountability framework. The result is not failure. Each project delivers value within its own domain. The result is that the value remains isolated, and the opportunity for a shared normative infrastructure remains unrealised2.

The fragmentation has a deeper cause than technical incompatibility. Merigoux (2024) identifies it as the consequence of a field that has pursued all-encompassing systems rather than interoperable components. Each project, driven by the ambition to build a complete formal model of its legal domain, has developed its own representation language, its own evaluation semantics, its own toolchain, and its own ecosystem of users, creating what Merigoux calls “a third class of model-makers, distinct from lawyers and programmers, that act as an intermediary and barrier that controls how the worlds of CS and Law interact with each other.” The OLRF’s ecosystem strategy is designed to break this pattern. It does not ask existing projects to adopt a new modelling language. It asks them to publish their normative content through a shared infrastructure that makes it publicly accessible, constitutionally anchored, and cross-jurisdictionally visible, while preserving their internal diversity3.

The OLRF’s response to this landscape is not replacement. It would be strategically unwise, institutionally arrogant, and practically impossible to propose that the projects described above should abandon their existing systems and adopt the OLRF instead. Their systems work. Their communities are invested. Their institutional relationships are established. A strategy that demands replacement would produce resistance rather than adoption.

The response is connection. The OLRF offers the existing ecosystem something that no individual project can provide for itself: a shared normative infrastructure into which different systems can publish, against which different agents can be validated, and through which different jurisdictions can recognise each other’s normative specifications without requiring format uniformity. The OLRF does not ask existing projects to change what they do. It asks them to connect what they do to a common layer that makes their work publicly accountable, cross-jurisdictionally visible, and constitutionally anchored.

What the Ecosystem Needs and Does Not Have

The existing projects share, despite their diversity, a common set of unresolved problems that no individual project can solve alone.

The first is the promulgation problem. Most existing rule engines and Law-as-Code systems produce executable specifications that are technically functional but constitutionally invisible. They operate inside administrative IT systems, accessible to the authority’s technical staff but not to courts, legislatures, civil society, or the affected citizen. The specification may be internally correct. But if it is not publicly available, sub-normatively anchored, and subject to democratic oversight, it fails the constitutional requirements that Part III of this paper has established. The OLRF’s Registry and sub-normative linkage system solve the promulgation problem: they provide a publication infrastructure through which any executable specification, regardless of its internal format, can become a public legal artefact4.

The second is the accountability problem. Existing systems produce decisions, but most do not produce Decision Packages in any structured sense. The citizen receives an outcome. The court, if a challenge is brought, receives an outcome and perhaps a general description of the system’s behaviour. Neither receives a formally structured, cryptographically signed, version-identified record of which normative specification was applied, which facts were decisive, which exceptions were considered, and which version of the executable logic was in force. The OLRF’s Decision Package and audit architecture solve the accountability problem: they provide a structured evidentiary object that any existing system can produce, regardless of its internal evaluation method, if it connects to the OLRF’s output layer.
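
The structured evidentiary object described above can be illustrated with a minimal sketch. All field names, the Registry identifier, and the signing scheme below are assumptions for illustration only; a production system would use a real digital signature (for example Ed25519 over the canonical bytes) rather than a keyed hash.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionPackage:
    decision_tree_id: str          # Registry identifier of the applied specification
    decision_tree_version: str     # exact version in force at decision time
    decisive_facts: dict           # the facts that determined the outcome
    exceptions_considered: list    # exceptions evaluated, with their results
    outcome: str                   # the legal effect produced
    signature: str = ""            # authority's signature (placeholder scheme)

    def canonical_bytes(self) -> bytes:
        """Deterministic serialisation, so the same package always hashes identically."""
        payload = {k: v for k, v in asdict(self).items() if k != "signature"}
        return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

    def sign(self, secret: bytes) -> None:
        # Stand-in for a real digital signature over canonical_bytes().
        self.signature = hashlib.sha256(secret + self.canonical_bytes()).hexdigest()

pkg = DecisionPackage(
    decision_tree_id="de/wohngeld/art-9",          # hypothetical identifier
    decision_tree_version="2026-01-01",
    decisive_facts={"monthly_income": 1200},
    exceptions_considered=[{"id": "student_exclusion", "applied": False}],
    outcome="benefit_granted",
)
pkg.sign(b"authority-key")
```

The deterministic serialisation is the design point: any system, regardless of its internal evaluation method, can emit this object at its output layer, and two verifiers will always compute the same bytes for the same decision.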

The third is the qualification problem. As agentic AI systems become more prevalent in public administration, the question of which agents are qualified to perform normative functions becomes urgent. No existing project has a systematic answer to this question. Individual projects may have internal quality assurance processes, but there is no cross-project, cross-jurisdictional standard for what it means for an AI agent to be qualified to subsume facts under legal concepts or to reason autonomously from statutory text. The OLRF’s agent certification system (Chapter 10) solves the qualification problem: it provides a formal, domain-specific, model-specific, test-based certification framework that any agent from any project can submit to, regardless of its internal architecture5.

The fourth is the interoperability problem. A norm specified in OpenFisca’s Python-based parameter structure cannot be directly compared with a norm specified in RegelSpraak’s controlled natural language or in Catala’s default logic. Cross-jurisdictional comparison, cross-system validation, and cross-project learning are blocked by format incompatibility. The OLRF’s canonical Decision Tree format and Registry do not require format uniformity. They require format translatability: the ability to express the normative content of any system’s specification in a common interchange format that preserves the essential structure (conditions, effects, exceptions, parameters, discretion points, sub-normative anchors) while remaining agnostic to the internal representation.
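
What format translatability means in practice can be made concrete with a single illustrative node. Every key and value below is an assumption, not the OLRF schema; the point is that the node records conditions, effects, exceptions, parameters, discretion points, and the sub-normative anchor while saying nothing about the source system’s internal representation.

```python
import json

# One node of a hypothetical canonical interchange format. A translation layer
# from OpenFisca, RegelSpraak, or Catala would emit nodes of this shape,
# whatever its own internal representation looks like.
node = {
    "id": "eligibility.income_test",
    "condition": {"var": "monthly_income", "op": "<=", "value": "param:income_ceiling"},
    "effect": {"grant": "housing_benefit"},
    "exceptions": [{"ref": "eligibility.student_exclusion"}],
    "parameters": {"income_ceiling": {"value": 1500, "valid_from": "2026-01-01"}},
    "discretion_point": None,                       # no human judgment required here
    "sub_normative_anchor": "Art. 9(2), first sentence",  # statutory source
}

# Lossless JSON round-tripping is the property cross-system exchange relies on.
assert json.loads(json.dumps(node)) == node
```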

These four problems (promulgation, accountability, qualification, interoperability) are not problems that the existing projects have failed to address through lack of effort. They are problems that arise at a level above any individual project: the infrastructure level. They require a shared solution, not a project-specific one. The OLRF is designed to provide that shared solution.

OLRF Docking Patterns

The OLRF’s ecosystem connection strategy is organised around three docking patterns, each designed for a different type of existing system and a different depth of integration. The patterns are cumulative: a project can begin with the Connector Pattern and deepen its integration over time, without having to commit to the most demanding form of connection from the outset6.

The Connector Pattern: Publish Without Replacing

The Connector Pattern is designed for existing rule engines, Law-as-Code systems, and deterministic automation platforms that already operate in production. The pattern works as follows. The existing system continues to operate as before: its internal format, its evaluation engine, its operational workflows remain unchanged. In parallel, a translation layer produces a conformant OLRF Decision Tree from the existing system’s normative specification. The Decision Tree is published in the Registry with full sub-normative linkage, a Coverage Map, and the responsible authority’s cryptographic signature. The published tree is the OLRF-conformant representation of the same norm that the existing system applies. It does not replace the existing system’s operation. It documents its normative basis in a publicly accessible, constitutionally anchored form.

The Connector Pattern delivers immediate value. The existing system gains public promulgation (its normative basis is now visible to courts, legislatures, and civil society). It gains sub-normative anchoring (every element of its specification is now traceable to the statutory text). It gains Coverage Map documentation (the scope of its automation, including model assignments and certification requirements, is now publicly declared). And it gains version control and cryptographic integrity (every change to the specification is tracked, signed, and historically preserved in the Registry).

The translation layer is the critical component. It must be capable of expressing the existing system’s normative content in the OLRF Decision Tree format without loss of essential structure. The existing systems in the Law-as-Code landscape use different internal representations: OpenFisca uses a Python-based parameter and variable structure, RegelSpraak uses a controlled natural language with formal semantics, Catala uses a default logic with explicit scope management, Blawx uses a visual block-based representation with s(CASP) reasoning, and PolicyEngine uses an OpenFisca fork with national-scale parameter sets. In each case, the translation must preserve the conditions, effects, exceptions, parameters, and discretion points of the original specification, while expressing them in the OLRF’s canonical JSON format7.

Among the existing formats, RUML (Rulemapping Logic Format) occupies a distinctive position in relation to the Connector Pattern. RUML is an open, JSON-based format developed by the Rulemapping Group that represents the complete subsumption structure of a legal norm: conditions, legal consequences, exceptions, sub-exceptions, and cross-references to other norms, organised in the sequence and logic of their legal application8. The structural correspondence between RUML and the OLRF Decision Tree is closer than for any other format in the current landscape. Both are JSON-based. Both model the same normative elements (conditions, effects, exceptions, parameters). Both organise those elements in the logic of legal application rather than in the logic of business process flow. Both treat the subsumption architecture of the norm as the primary object of representation.

The translation from RUML to an OLRF Decision Tree is therefore architecturally straightforward, probably the most direct of all the Connector Pattern translations. The normative structure is already present in RUML. What the translation adds is the OLRF’s constitutional infrastructure: the sub-normative linkage (tracing every RUML element to the specific sentence or clause of the statutory text from which it derives), the Coverage Map (classifying every element by implementation status, model assignment, and certification requirement), the cryptographic signature, and the Registry publication. A RUML specification that has been translated into the OLRF format and published in the Registry is no longer merely a machine-readable model of a norm. It is a public legal artefact: sub-normatively anchored, version-controlled, cryptographically signed, and subject to the full apparatus of democratic oversight that Part III of this paper has described.

This proximity has a further consequence for the OLRF’s authoring workflow. The Rulemap Builder, the visual no-code tool through which legal professionals construct Rulemaps, can serve as the analytical front-end for OLRF Decision Tree authoring. The legal professional uses the Rulemap Builder to decompose the statutory norm into its subsumption structure: conditions, consequences, exceptions, cross-references. The result is a RUML specification. The translation layer converts the RUML specification into an OLRF Decision Tree. The legal professional then completes the OLRF-specific elements that RUML does not capture: the sub-normative anchors (which specific sentence of the statute does this condition derive from?), the Coverage Map classifications (is this element implemented, discretionary, excluded, or contested? under which model? with which certification requirement?), and the test suite. In this workflow, RUML is the analytical layer and the OLRF is the publication layer. The Rulemap Builder makes the legal analysis tractable. The OLRF makes the result constitutionally accountable9.
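
The division of labour described above, RUML as the analytical layer and the OLRF as the publication layer, can be sketched as a completion step: the RUML structure passes through largely intact, and the OLRF-specific elements are layered on afterwards. Every key name here is hypothetical, not a published schema.

```python
def to_olrf_tree(ruml: dict, anchors: dict, coverage: dict, tests: list) -> dict:
    """Complete a RUML specification into an (illustrative) OLRF Decision Tree."""
    # A tree is not publishable until every element is sub-normatively anchored.
    missing = set(ruml["conditions"]) - set(anchors)
    if missing:
        raise ValueError(f"unanchored elements: {sorted(missing)}")
    return {
        # Normative structure: already present in the RUML specification.
        "conditions": ruml["conditions"],
        "effects": ruml["consequences"],
        "exceptions": ruml.get("exceptions", []),
        # OLRF-specific elements completed by the legal professional.
        "sub_normative_anchors": anchors,   # element id -> statutory sentence
        "coverage_map": coverage,           # element id -> status/model/certification
        "test_suite": tests,
    }

tree = to_olrf_tree(
    {"conditions": ["income_test"], "consequences": ["grant_benefit"]},
    anchors={"income_test": "Art. 9(2), first sentence"},
    coverage={"income_test": {"status": "implemented", "model": "A"}},
    tests=[{"facts": {"monthly_income": 1200}, "expected": "grant_benefit"}],
)
```

The guard clause captures the workflow’s ordering: analysis first, then anchoring and classification, and only then publication.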

For the other existing formats, the translation is more demanding but follows the same principle. The OLRF’s approach is pragmatic rather than theoretical. Rather than designing a universal meta-format that can express every possible internal representation, the OLRF specifies a canonical interchange format that captures the normative essentials, and develops specific translation libraries for the major existing formats. The initial translation libraries target four formats: RUML (Rulemapping Logic Format to OLRF Decision Tree, the reference translation given its close structural correspondence), OpenFisca (Python parameter structure to OLRF Decision Tree), RegelSpraak (CNL to OLRF Decision Tree), and Catala (default logic to OLRF Decision Tree). Additional libraries can be contributed by the community as new formats become relevant. The translation is not required to be lossless: internal implementation details that do not affect the normative content (performance optimisations, internal data structures, UI-specific metadata) need not be preserved. What must be preserved is the normative structure: the conditions under which legal effects are produced, the exceptions that modify those effects, the parameters whose values determine specific outcomes, and the points at which the specification requires human judgment10.

This pragmatic approach, domain-specific translations rather than a universal meta-format, reflects a deliberate architectural choice. The OLRF follows what Merigoux, drawing on Raymond’s articulation of the Unix philosophy, identifies as the principle that has been consistently violated by AI and Law infrastructure projects: “Write programs that do one thing and do it well. Write programs to work together.” Each translation library does one thing (translates a specific format into the OLRF canonical format) and does it well (preserving the normative structure while discarding implementation-specific details). The libraries work together through the Registry’s standardised publication interface. No library needs to understand any other library’s source format. The interoperability is achieved at the publication layer, not at the modelling layer11.
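
The “do one thing and do it well” principle might look like this in code: each translator knows only its own source format and emits the common canonical structure, and publication is the only shared interface. All names are illustrative, and the per-format mappings are drastically simplified.

```python
from typing import Callable, Dict

# Registry of per-format translators. No translator knows any other's format;
# interoperability happens at the publication layer, not the modelling layer.
TRANSLATORS: Dict[str, Callable[[dict], dict]] = {}

def register(fmt: str):
    def wrap(fn):
        TRANSLATORS[fmt] = fn
        return fn
    return wrap

@register("ruml")
def ruml_to_tree(spec: dict) -> dict:
    # RUML already models conditions/consequences/exceptions, so the mapping
    # is close to one-to-one.
    return {
        "conditions": spec["conditions"],
        "effects": spec["consequences"],
        "exceptions": spec.get("exceptions", []),
    }

@register("openfisca")
def openfisca_to_tree(spec: dict) -> dict:
    # Parameters and variables must be lifted into explicit conditions/effects;
    # this stub only hints at that work.
    return {
        "conditions": [{"var": v} for v in spec["variables"]],
        "effects": spec["formulas"],
        "exceptions": [],
    }

def publish(fmt: str, spec: dict) -> dict:
    """The single shared interface: translate, then hand off for Registry publication."""
    tree = TRANSLATORS[fmt](spec)
    tree["format_of_origin"] = fmt
    return tree
```

Adding a community-contributed translator means adding one registered function; nothing else in the pipeline changes.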

The Connector Pattern is already technically feasible for several existing projects. The Dutch RegelRecht project has developed a YAML-based specification format with Go and Python evaluation engines, and has explicitly built connectors to RegelSpraak (including a lac-to-regelspraak converter). The OpenFisca ecosystem has a well-documented parameter API that makes systematic extraction of normative content tractable. Catala’s formal semantics provide a precise specification of the normative structure that a translation layer can operate on. And RUML’s JSON-based structure, with its explicit representation of the subsumtion architecture, provides the most direct path to OLRF Decision Tree generation. The Connector Pattern does not require these projects to change. It requires a bridge that connects their existing output to the OLRF’s publication infrastructure.

The Validator Pattern: Validate Without Constraining

The Validator Pattern is designed for agentic AI systems that perform normative functions in public administration: systems that classify applications, assess eligibility, evaluate compliance, or prepare administrative determinations. These systems are already being deployed across European public administrations. They operate with varying degrees of autonomy, from simple document classification to complex eligibility assessment. Most operate without any structured normative validation framework. Their outputs are checked, if at all, through manual sampling or post-hoc quality review.

The Validator Pattern provides these systems with a structured, constitutionally anchored validation mechanism without requiring them to change their internal architecture. The pattern works as follows. The existing agent performs its normative function (subsumption, classification, assessment) using its existing reasoning process. The output is then submitted to the OLRF evaluation engine, which validates the agent’s determination against the published Decision Tree for the relevant normative domain. The validation produces one of three results: confirmation (the agent’s determination matches the tree’s deterministic evaluation), classified deviation (the agent’s determination differs from the tree’s evaluation, and the deviation is classified according to the deviation classification system described in Chapter 5), or escalation (the deviation exceeds defined thresholds and the case is routed to a human decision-maker)12.
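
A minimal sketch of the three validation outcomes, assuming a numeric deviation weight and a single escalation threshold; both are placeholders, since Chapter 5’s classification system governs the real taxonomy.

```python
from enum import Enum

class Result(Enum):
    CONFIRMED = "confirmed"                        # agent matches the tree
    CLASSIFIED_DEVIATION = "classified_deviation"  # recorded and classified
    ESCALATED = "escalated"                        # routed to a human decision-maker

def validate(agent_outcome: str, tree_outcome: str,
             deviation_weight: float, escalation_threshold: float = 0.5) -> Result:
    """Check an agent's determination against the Decision Tree's evaluation."""
    if agent_outcome == tree_outcome:
        return Result.CONFIRMED
    if deviation_weight >= escalation_threshold:
        return Result.ESCALATED
    return Result.CLASSIFIED_DEVIATION
```

Note that a classified deviation does not block the determination; it documents it, which is what makes the agent’s output reviewable rather than opaque.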

The Validator Pattern is the OLRF’s answer to the Model B integration challenge. It allows existing agents to operate under Model B (guided evaluation with validation) without requiring them to be rebuilt from scratch. The agent continues to reason using its existing methods. The OLRF provides the normative guardrail: a published, sub-normatively anchored Decision Tree against which the agent’s output is checked, a structured deviation record that makes the relationship between the agent’s reasoning and the statutory norm visible, and an escalation mechanism that ensures constitutionally significant deviations trigger human review.

The value for the existing system is substantial. The agent gains a constitutional legitimacy framework that it currently lacks: its determinations are no longer unvalidated AI outputs but validated normative assessments, checked against a publicly published specification, with deviations documented and classified. Courts reviewing the agent’s determinations can inspect not only the output but the validation record: did the agent’s subsumption match the tree? If not, how was the deviation classified? Was the deviation within permissible bounds? Was the escalation threshold correctly applied? This evidentiary structure transforms the agent’s output from an opaque AI determination into a reviewable normative act.

The Validator Pattern also provides the pathway to formal agent certification (Chapter 10). An agent that consistently operates within the validation corridor, that correctly handles escalation scenarios, and that produces determinations that match the Decision Tree’s evaluation across a representative population of cases has, in effect, demonstrated the competencies that Model B certification requires. The validation record is the evidence base from which the certification assessment is drawn. The Validator Pattern is therefore not merely a quality assurance mechanism. It is the operational precursor to formal certification: the mechanism through which existing agents build the track record that certification requires13.

The Certifier Pattern: Qualify for Cross-System Operation

The Certifier Pattern is designed for AI agents that seek to operate across normative domains, across jurisdictions, or across administrative systems. It is the deepest form of integration and presupposes that the agent has already operated under the Validator Pattern for a sufficient period to demonstrate its normative capabilities.

The pattern works as follows. The agent’s provider submits the agent (in a specific, identified version) for formal certification under the OLRF’s agent certification system (Chapter 10). The certification is domain-specific, model-specific, and version-bound. If the agent passes the certification examination (which includes the Decision Tree’s test suite, adversarial tests, consistency tests, escalation compliance tests, and, for Model C, reasoning chain quality tests), it receives a Registry-published credential that any OLRF-conformant system will recognise.

The value for the agent is market access. A certified agent can operate in any jurisdiction that publishes OLRF-conformant Decision Trees, subject to the Coverage Map’s certification requirements. An agent certified for Model B subsumption under German income tax law can operate in any German authority that has published a Class B Decision Tree for income tax evaluation, without requiring a separate integration, a separate quality assurance process, or a separate procurement relationship with each authority. The certification credential, verified at Control 5 of the interface layer (Chapter 8), is the agent’s passport to cross-system operation14.
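
The credential check performed at the interface layer can be sketched as follows. The field names, the revocation flag, and the validity window are assumptions for the sketch; the point is that admission is domain-specific, model-specific, and version-bound.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Credential:
    agent_id: str
    agent_version: str   # certification is bound to an exact agent version
    domain: str          # e.g. "de/income-tax" (hypothetical identifier)
    model: str           # "A", "B", or "C"
    valid_until: date
    revoked: bool = False

def admits(cred: Credential, *, domain: str, required_model: str,
           agent_version: str, today: date) -> bool:
    """Would this credential admit the agent at the interface layer's Control 5?"""
    return (not cred.revoked
            and cred.domain == domain
            and cred.model == required_model
            and cred.agent_version == agent_version
            and today <= cred.valid_until)
```

Because the check compares exact versions, an upgraded agent must be recertified before it regains access, which is how version-binding becomes operationally enforceable.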

The value for the ecosystem is quality assurance at scale. Instead of every authority conducting its own ad-hoc assessment of every agent it considers deploying, the certification system provides a standardised, publicly documented, independently verified quality baseline. Authorities can deploy certified agents with confidence that their normative capabilities have been assessed against the specific requirements of the relevant domain and model. Courts can verify, from the Decision Package, that the agent that produced a determination held the required certification at the time of the determination. Civil society can inspect the certification record and assess whether the qualification threshold is adequate for the constitutional weight of the norm.

The Certifier Pattern also addresses the sovereignty concern raised in Chapter 14. A jurisdiction that requires OLRF certification for agents operating on its law has established sovereign control over who may apply its law in machine form. The certification requirements are set by the jurisdiction (or by a federated standard-setting process, as described in Chapter 10). The certification is administered under the jurisdiction’s institutional authority. The certification can be revoked by the jurisdiction if the agent’s performance degrades. The agent’s provider, whether a domestic start-up, a European technology company, or a global hyperscaler, must meet the same domain-specific, test-based requirements. The Certifier Pattern is the mechanism through which the sovereignty argument of Chapter 14 becomes an operational reality for the agent layer.

Mapping the Existing Landscape

The three docking patterns apply differently to the major existing projects in the Law-as-Code and automated administration landscape. The following mapping is illustrative, not exhaustive.

OpenFisca and PolicyEngine are deterministic rule engines that model tax and benefit systems at national scale. Their natural entry point is the Connector Pattern: their Python-based parameter and variable structures can be translated into OLRF Decision Trees, published in the Registry, and subject to Coverage Map documentation. PolicyEngine’s existing population-level simulation capability maps directly onto the OLRF’s simulation test category (Chapter 6). The translation library for OpenFisca is a Phase 1 deliverable15.

RegelSpraak and RegelRecht are Dutch initiatives that represent law in controlled natural language with formal execution semantics. RegelSpraak is in production use at the Belastingdienst (Dutch tax authority). RegelRecht is a proof-of-concept by the Dutch Ministry of the Interior that explicitly builds connectors between formats. Their natural entry point is the Connector Pattern, with RegelRecht’s existing connector architecture providing a model for the OLRF translation layer. The controlled natural language of RegelSpraak is particularly amenable to OLRF translation because its formal semantics preserve the normative structure (conditions, effects, exceptions) that the OLRF Decision Tree requires16.

Catala is an academic project that models law using a default logic with explicit scope management. Its formal semantics are the most precise of the existing projects, making it an ideal candidate for the Connector Pattern with a high-fidelity translation. Catala’s approach to default logic (where more specific rules override more general rules, following the legal principle of lex specialis) maps onto the OLRF Decision Tree’s exception and priority structure17.

Blawx is a Canadian project that uses a visual block-based interface with s(CASP) reasoning, supporting defeasibility and open-world reasoning. Its natural entry point is the Connector Pattern, but its open-world reasoning capability also makes it a candidate for the Validator Pattern: a Blawx-based agent that reasons under open-world assumptions can be validated against an OLRF Decision Tree that represents the closed-world deterministic baseline, with deviations classified wherever Blawx’s open-world reasoning identifies possibilities that the closed-world tree does not consider18.

Agentic administration projects (LLM-based agents that process administrative applications, classify documents, assess eligibility, or prepare determinations) are the most natural candidates for the Validator Pattern. These projects are proliferating across European public administrations, often as pilot projects with limited governance frameworks. The Validator Pattern provides them with the normative guardrail they currently lack: a published Decision Tree against which their output is validated, a structured deviation record, and an escalation mechanism. For agents that have demonstrated consistent performance under the Validator Pattern, the Certifier Pattern provides the pathway to formal qualification and cross-system operation.

The Rulemapping method occupies a distinctive position in this landscape. It is not a rule engine but a methodology for the visualisation and structural analysis of legislative logic. Its output (a visual map of the decision structure of a norm, including conditions, effects, exceptions, and discretion points) is structurally analogous to the OLRF Decision Tree, differing primarily in format rather than in substance. The translation from Rulemapping output to OLRF Decision Tree is architecturally straightforward, because Rulemapping already identifies the normative elements that the OLRF Decision Tree formalises. Rulemapping’s most significant contribution to the OLRF ecosystem may therefore be methodological rather than technical: it provides the analytical method through which legal professionals can perform the sub-normative analysis that Decision Tree authoring requires, without needing to work directly in the OLRF’s technical format. The authoring tool described in Chapter 16 should integrate Rulemapping’s visual methodology as an analysis layer that precedes and structures the formal specification process19.

The Network Effect: Why Connection Creates Value

The docking patterns are not merely technical integration mechanisms. They are the means by which the OLRF achieves the network effects that make it valuable as infrastructure rather than merely as a specification.

Each project that connects through the Connector Pattern adds a normative domain to the Registry. Each normative domain in the Registry becomes available for validation under the Validator Pattern. Each agent that is validated accumulates the performance record that certification under the Certifier Pattern requires. Each certified agent expands the pool of qualified actors that authorities can deploy. Each deployment generates Decision Packages that strengthen the evidentiary base for judicial review, legislative oversight, and civil society scrutiny.

The network effect operates in the opposite direction as well. Each jurisdiction that requires OLRF conformance for its automated governance systems creates demand for certified agents. Each demand signal for certified agents creates an incentive for existing projects to connect through the Validator and Certifier Patterns. Each connection deepens the ecosystem and increases the value of the Registry as a shared normative infrastructure20.

This is why the OLRF’s ecosystem strategy is connection rather than replacement. Replacement would require each existing project to abandon its investment and start over. Connection leverages existing investment and adds a layer of value that each project cannot achieve alone: public promulgation, constitutional anchoring, cross-jurisdictional visibility, structured accountability, and formal agent qualification. The existing projects bring their domain expertise, their production experience, and their established communities. The OLRF brings the constitutional infrastructure that makes their work publicly accountable and cross-jurisdictionally interoperable.

The result, if the connection strategy succeeds, is not a single monolithic system. It is a federated ecosystem of diverse projects and approaches, connected through a shared normative layer, governed by common constitutional principles, and subject to democratic oversight through the Registry, the Coverage Map, and the agent certification system. That is the vision of the OLRF as infrastructure: not a system that replaces what exists, but a layer that makes what exists visible, accountable, and interoperable.

The Institutional Precondition: Mutual Recognition, Not Uniformity

The ecosystem connection strategy rests on a principle that must be stated explicitly, because it is easily misunderstood. The OLRF does not require that all existing projects adopt the same internal representation, the same evaluation semantics, or the same technical architecture. It requires that all participating projects be able to express their normative content in a common interchange format that preserves the constitutionally essential structure. The principle is mutual recognition, not uniformity21.

This principle has a direct precedent in European law. The EU’s framework for the recognition of professional qualifications (Directive 2005/36/EC) does not require that all Member States train their professionals in the same way. It requires that each Member State’s training produce professionals whose competence meets a common minimum standard, assessed through a recognition procedure that respects the diversity of national training systems while ensuring the quality of the result. The OLRF applies the same logic to normative specifications: each project may use its own internal format, its own evaluation method, and its own development process. What it must be able to produce is a Decision Tree in the OLRF canonical format that faithfully represents its normative content, published in the Registry with sub-normative linkage and a Coverage Map. The internal diversity is preserved. The external accountability is standardised.

This is the condition under which a fragmented landscape of isolated projects can become a connected ecosystem of interoperable systems, without requiring any project to abandon what works.

Footnotes

  1. The landscape of Law-as-Code and Rules-as-Code projects has been surveyed in: OECD, “Cracking the Code: Rulemaking for Humans and Machines”, OECD Working Papers on Public Governance, No. 42 (Mohun, J. and Roberts, A.), 2020. On the terminological distinction: Rules as Code is the broader category, encompassing any structured representation of rules for machine processing; Law as Code is the narrower, more demanding category, the structured, authoritative, machine-executable provision of law as defined in this paper. The OLRF operates in the Law as Code domain.

  2. The fragmentation pattern is well documented in the digital infrastructure literature. Arthur, W. B., “Competing Technologies, Increasing Returns, and Lock-In by Historical Events”, The Economic Journal, Vol. 99, No. 394, 1989, pp. 116 ff., demonstrates that fragmented ecosystems with incompatible standards tend toward lock-in rather than convergence, because the cost of switching increases with the installed base. The OLRF’s connection strategy is designed to avoid this lock-in by offering a shared layer that does not require switching, only bridging.

  3. Merigoux, D., “Scoping AI & Law Projects: Wanting It All is Counterproductive”, CRCL, Vol. 2, Issue 2, 2024, p. 6. Merigoux’s analysis of the LegalRuleML/Akoma Ntoso ecosystem is particularly instructive: “the authors have ended up at reimplementing a whole dedicated software engineering toolchain relying on ad-hoc tools, that do not interoperate with standard software engineering tools.” The OLRF’s architectural response is to build on standard engineering infrastructure (JSON formats, IETF protocols, OpenID credentials, standard version control) rather than constructing an isolated toolchain. The Connector Pattern (this chapter) extends this principle to the ecosystem: existing projects keep their tools, and a translation layer bridges their output to the shared publication infrastructure.

  4. The promulgation problem is the constitutional core of the ecosystem challenge. Fuller, L., The Morality of Law, Yale University Press 1964, pp. 49 ff. (the requirement that law be publicly promulgated) applies not only to the statutory text but to the executable specification through which the statutory text is applied. An executable specification that operates inside a closed administrative IT system, accessible only to the authority’s technical staff, fails this requirement regardless of its internal correctness.

  5. The qualification problem has become urgent as agentic AI systems proliferate in public administration. The European Commission’s “2024 Report on the State of the Digital Decade”, COM(2024) 260 final, documents the growing deployment of AI systems in administrative processes across Member States, while noting the absence of standardised quality frameworks for assessing the normative capabilities of these systems.

  6. The three docking patterns (Connector, Validator, Certifier) represent increasing depth of integration with correspondingly increasing institutional commitment. The pattern is inspired by the integration models used in European digital identity frameworks: eIDAS Regulation (EU) No 910/2014 distinguishes between notified electronic identification schemes (which require the deepest integration) and electronic identification means accepted under bilateral agreements (which require less). The OLRF applies the same graduated integration logic to normative systems and AI agents.

  7. For the specific internal formats: OpenFisca, “Country Package Documentation”, https://openfisca.org/doc/ (Python-based parameter and variable structure); Corsius, M. et al., “RegelSpraak: a CNL for Executable Tax Rules Specification”, Proceedings of the 7th International Workshop on Controlled Natural Language (CNL 2020/21), 2021 (controlled natural language with formal execution semantics); Merigoux, D., Monat, R., and Protzenko, J., “Catala: A Programming Language for the Law”, Proceedings of the ACM on Programming Languages, Vol. 5, ICFP, 2021 (default logic with explicit scope management); Morris, J., “Blawx: A User-Friendly Web-Based Tool for Answerable Legal Automation”, Proceedings of the 18th International Conference on Artificial Intelligence and Law (ICAIL), 2021 (visual block-based interface with s(CASP) reasoning).

  8. RUML (Rulemapping Logic Format): open JSON-based specification, Rulemapping Group, https://rulemapping.org/. RUML was developed as the machine-readable output format of the Rulemapping method created by Prof. Dr. Stephan Breidenbach, originally as a pedagogical tool for visualising legal subsumption and subsequently developed into a technology for the modelling and automation of complex regulatory processes. The format represents the complete if-then logic of a law, including conditions, consequences, exceptions, and references, organised in the sequence of their legal application rather than in the linear structure of the statutory text. RUML is jurisdiction-independent and domain-independent. The Rulemap Builder (beta) provides a visual, no-code interface for constructing RUML specifications. See: Breidenbach, S., Was Gesetze sein könnten: Mit Methode zum guten Gesetz, C. H. Beck, München 2025; SPRIND, “Rulemapping: Durch skalierbare Regel-KI Bürokratie entlasten”, 2025 (project description and funding rationale).

  9. The proposition that RUML can serve as the analytical front-end for OLRF Decision Tree authoring reflects a division of labour between two complementary tools. The Rulemap Builder excels at making the legal decomposition tractable for legal professionals who are not software engineers: its visual interface allows the professional to see the subsumption structure as a whole, to identify missing exceptions, to trace cross-references, and to verify that the logical structure matches the statute’s intent. The OLRF’s authoring process excels at adding the constitutional infrastructure that publication requires: sub-normative anchoring, Coverage Map classification, test suite generation, cryptographic signing. Neither tool alone is sufficient. Together, they form a workflow in which the legal analysis and the constitutional publication are separated into distinct, manageable steps. This separation is important because it means that the legal professional does not need to learn the OLRF specification format in order to perform the legal analysis, and the OLRF publication process does not need to replicate the visual analytical capabilities that the Rulemap Builder already provides.

  10. For the specific internal formats and their translation characteristics: OpenFisca, “Country Package Documentation”, https://openfisca.org/doc/ (Python-based parameter and variable structure; translation requires mapping OpenFisca variables to OLRF conditions and OpenFisca parameters to OLRF parameters); Corsius, M. et al., “RegelSpraak: a CNL for Executable Tax Rules Specification”, Proceedings of the 7th International Workshop on Controlled Natural Language (CNL 2020/21), 2021 (controlled natural language with formal execution semantics; translation requires parsing CNL into OLRF condition-effect structure); Merigoux, D., Monat, R., and Protzenko, J., “Catala: A Programming Language for the Law”, Proceedings of the ACM on Programming Languages, Vol. 5, ICFP, 2021 (default logic with explicit scope management; translation requires mapping Catala’s default/exception hierarchy to the OLRF Decision Tree’s exception structure, which is architecturally natural because both systems model the same lex-specialis principle).
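The OpenFisca-to-OLRF direction described in footnote 10 can be sketched as a toy mapping. The input dictionaries below stand in for OpenFisca's real variable and parameter objects (which are Python classes with formulas, not plain dicts), and the output keys are illustrative, not the actual OLRF schema:

```python
# Toy sketch of a Connector translation layer: OpenFisca-style variables map to
# OLRF-style conditions, OpenFisca-style parameters map to OLRF-style parameters.
# Both the input shape and the output keys are illustrative assumptions.

def translate_openfisca(variables: dict, parameters: dict) -> dict:
    """Map an OpenFisca-like structure onto an OLRF-like fragment."""
    return {
        "conditions": [
            {"id": name, "formula": spec["formula"], "entity": spec["entity"]}
            for name, spec in variables.items()
        ],
        "parameters": [
            {"id": name, "value": value}
            for name, value in parameters.items()
        ],
    }

variables = {
    "eligible": {"formula": "income <= income_ceiling", "entity": "person"},
}
parameters = {"income_ceiling": 25000}

olrf_fragment = translate_openfisca(variables, parameters)
print(olrf_fragment["conditions"][0]["id"])
```

The sketch shows why the footnote calls this a mapping rather than a rewrite: the translation is structural, and the producing project's formulas pass through unchanged.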

  11. Raymond, E. S., The Art of Unix Programming, Addison-Wesley 2003, cited in Merigoux (2024), p. 6. The Unix philosophy’s emphasis on composable, single-purpose tools with standardised interfaces is the engineering antidote to the monolithic toolchain problem that Merigoux identifies in the LegalRuleML ecosystem and that the OLRF must avoid. The OLRF’s component architecture (Registry, Engine, Authoring Tool, Certification Infrastructure, Translation Libraries) is designed so that each component can be developed, deployed, and replaced independently, connected through IETF-standardised APIs rather than through internal coupling.

  12. The three-outcome validation (confirmation, classified deviation, escalation) corresponds to the Model B evaluation architecture described in Chapter 5. The deviation classification system provides the structured vocabulary for describing the relationship between the agent’s reasoning and the tree’s deterministic evaluation: a deviation may be within the validation corridor (the agent’s subsumption is different from the tree’s but legally defensible), outside the corridor but below the escalation threshold (the deviation is documented and flagged for review), or above the escalation threshold (the case must be referred to a human decision-maker).
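The corridor logic of footnote 12 can be sketched as a threshold comparison. How a deviation is quantified, and the corridor and escalation values used here, are illustrative assumptions; the chapter does not fix them:

```python
# Sketch of the three-outcome validation: confirmation, classified deviation
# (within the corridor, or flagged below the escalation threshold), escalation.
# The deviation score and the two thresholds are illustrative assumptions.

def classify_validation(agent_outcome: str, tree_outcome: str,
                        deviation_score: float,
                        corridor: float = 0.2,
                        escalation: float = 0.6) -> str:
    if agent_outcome == tree_outcome:
        return "confirmation"
    if deviation_score <= corridor:
        # Different subsumption, but legally defensible.
        return "deviation_within_corridor"
    if deviation_score <= escalation:
        # Documented and flagged for human review.
        return "deviation_flagged_for_review"
    # Above the escalation threshold: referred to a human decision-maker.
    return "escalation_to_human"

print(classify_validation("deny_benefit", "grant_benefit", 0.45))
```

The design point is that the agent never silently overrides the tree: every mismatch lands in exactly one named category with a defined institutional consequence.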

  13. The proposition that the Validator Pattern is the operational precursor to formal certification reflects the general principle that certification should be based on demonstrated performance rather than theoretical capability. In aviation: EASA requires a minimum number of flight hours under supervised conditions before a pilot licence is issued (Regulation (EU) No 1178/2011, Part FCL). The Validator Pattern is the OLRF’s equivalent of supervised flight hours: the agent operates under the normative guardrail of the published Decision Tree, accumulating a performance record that demonstrates its capabilities in actual operational conditions.

  14. The market access value of the Certifier Pattern is analogous to the market access value of CE marking in the European Single Market: a product that bears the CE mark can be sold in any Member State without requiring a separate national conformity assessment. The OLRF certification credential functions as the CE mark for normative AI agents: a credential that any OLRF-conformant system in any participating jurisdiction will recognise, eliminating the need for redundant national assessments.

  15. OpenFisca: https://openfisca.org/; PolicyEngine: https://policyengine.org/. PolicyEngine’s partnership with the National Bureau of Economic Research (NBER) for microsimulation demonstrates the population-level analytical capability that the OLRF’s simulation tests are designed to leverage. For the French public administration context: Assemblée Nationale / beta.gouv.fr, “LexImpact”, 2019 (OpenFisca-based legislative impact simulation).

  16. RegelSpraak: Belastingdienst (Dutch Tax Authority), production use since 2019. The ALEF (Agile Law Execution Factory) pipeline integrates RegelSpraak specifications into the Belastingdienst’s operational IT environment. See: Corsius, M. et al., op. cit.; regels.overheid.nl (Dutch government rules publication platform, building on RegelSpraak and RegelRecht).

  17. Merigoux, D. et al., op. cit. Catala’s formal semantics are specified in a peer-reviewed publication, which makes the translation layer’s correctness verifiable in a formal sense: the translation from Catala to OLRF Decision Tree can be proven to preserve the normative structure, because both the source (Catala’s default logic) and the target (OLRF’s canonical format) have precise formal definitions. This is a stronger correctness guarantee than is available for translations from less formally specified systems.

  18. Morris, J., op. cit. Blawx’s support for defeasibility (the ability of more specific rules to override more general rules) and open-world reasoning (the ability to reason about facts that are unknown rather than treating them as false) creates a natural complementarity with the OLRF’s closed-world Decision Tree. The Decision Tree represents the deterministic baseline (what the law requires under fully known facts). The Blawx-based agent represents the exploratory capacity (what the law might require under partially known facts or competing interpretations). The Validator Pattern structures the relationship between the two: the agent explores, the tree validates, deviations are classified.

  19. Breidenbach, S., Was Gesetze sein könnten: Mit Methode zum guten Gesetz, C. H. Beck, München 2025, Kap. 3 (Visualisierung der Entscheidungslogik: visualising the decision logic) and Kap. 5 (Digitale Gesetzgebung als strategischer Prozess: digital legislation as a strategic process). The methodological contribution is significant because the OLRF’s greatest adoption barrier is not technical but analytical: the legal professional must be able to decompose a statutory norm into its conditions, effects, exceptions, parameters, and discretion points before the Decision Tree can be constructed. Rulemapping provides the analytical method for this decomposition. The OLRF provides the formal specification format into which the decomposition is expressed.

  20. The network effect dynamics described here follow the classic analysis of: Katz, M. L. and Shapiro, C., “Network Externalities, Competition, and Compatibility”, The American Economic Review, Vol. 75, No. 3, 1985, pp. 424 ff. The OLRF’s connection strategy is designed to create positive network externalities: each additional participant increases the value of the network for all existing participants, because the Registry becomes richer, the pool of certified agents grows, and the cross-jurisdictional coordination possibilities expand.

  21. The principle of mutual recognition rather than uniformity is foundational to European integration. See: CJEU C-120/78, Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein (Cassis de Dijon), 1979, establishing the principle that goods lawfully produced in one Member State must be admitted to the market of another, unless the importing State can justify a restriction on grounds of public interest. The OLRF applies this principle to normative specifications: a specification lawfully produced in one format must be recognisable through the OLRF interchange format in any participating jurisdiction, without requiring the originating project to adopt a different internal format.