Part IV: From Vision to Practice

Chapter 19

Reference Implementations, the Commercial Ecosystem, and the Path to Adoption


“The best way to validate an architecture is not to review it. It is to run it. Documentation describes what you intended. Code describes what you built. Only production tells you what actually matters.”
Kelsey Hightower, Principal Engineer, Google Cloud, Keynote at KubeCon North America, 2019

Why Reference Implementations Are Not Optional

A specification without a reference implementation is a hypothesis. It may be internally consistent, constitutionally rigorous, and internationally reviewed, and still contain errors that no amount of abstract analysis can reveal. The gap between a specification that reads correctly and a specification that builds correctly is, in complex technical architecture, consistently larger than its authors expect. Underspecified interfaces, ambiguous semantics, missing error conditions, implicit assumptions about execution environment: none of these are detectable by review alone. They are detectable only by implementation, and only by implementation conducted by teams who did not write the specification and who therefore bring no unconscious interpretive charity to its ambiguities1.

The OLRF’s reference implementation programme is therefore not a demonstration exercise. It is a core component of the specification development process: the mechanism through which the specification’s fitness for purpose is verified against the only criterion that ultimately matters, which is whether a competent engineering team can build a conformant system from it without recourse to the specification’s authors. Every implementation decision that the reference programme forces (every choice about data structure, API contract, error handling, and edge case behaviour) is either confirmed as a correct implementation of the specification’s intent or identified as a gap that must be resolved before the specification can be considered complete.

Reference implementations serve three distinct constituencies whose needs are not identical. Adopting public authorities need implementations that demonstrate the OLRF’s constitutional claims in practice: that the Decision Package genuinely supports the duty to give reasons, that the Discretion Point mechanism genuinely preserves space for human judgment, that the Coverage Map genuinely makes the scope of automation publicly accountable, and that the agent certification system genuinely ensures that only qualified agents perform normative functions. Commercial implementers need implementations that establish the technical baseline for conformance: what a conformant evaluation engine must produce, what the Registry API must accept and return, and where the specification permits implementation variation and where it requires exact conformance. The standardisation process (Chapter 18) needs implementations that demonstrate the specification’s completeness: that every normative claim in the document corresponds to a buildable technical requirement, and that the conformance classes are testable rather than merely aspirational2.

The Reference Implementation Portfolio

The reference implementation portfolio is organised around the three conformance classes defined in Chapter 16, with each class represented by at least one implementation that demonstrates the class’s requirements in a realistic deployment context. The portfolio is not a single monolithic system. It is a family of interoperable components, each independently deployable, each demonstrating a specific layer of the OLRF architecture, and each contributing to the overall evidence base for the specification’s completeness and implementability.

The Class A Reference Implementation (the Registry and Decision Tree publication infrastructure) is the foundational component on which all other implementations depend. It comprises four elements: a Registry server that accepts, validates, stores, and serves Decision Trees according to the OLRF specification; a Decision Tree authoring tool that guides legal professionals through the construction of a conformant Decision Tree from a legislative text, integrating the RUML translation layer (Chapter 17) as the analytical front-end for the sub-normative decomposition; a validation service that checks a submitted Decision Tree for structural correctness, sub-normative linkage completeness, and Coverage Map completeness (including a model assignment and certification requirement for each element) before accepting it for Registry publication; and the translation libraries for existing formats (RUML, OpenFisca, RegelSpraak, Catala) that enable the Connector Pattern described in Chapter 17.
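To make the validation service’s checks concrete, the following sketch shows one way a Decision Tree element and its pre-publication checks could be represented. The field names, classifications, and message texts are illustrative assumptions made for this sketch, not the OLRF schema, which the specification itself defines.

```python
# Illustrative sketch only: field names and checks are assumptions, not the OLRF schema.
from dataclasses import dataclass, field


@dataclass
class DecisionTreeNode:
    node_id: str
    question: str                       # e.g. "Is the applicant resident in the jurisdiction?"
    legal_anchor: str | None            # sub-normative anchor, e.g. "Act X, s. 4(2)(b)"
    coverage_class: str | None          # Coverage Map classification / model assignment
    required_certification: str | None  # certification an agent needs to supply this element
    children: list["DecisionTreeNode"] = field(default_factory=list)


def validate_tree(node: DecisionTreeNode, errors: list[str] | None = None) -> list[str]:
    """Return human-readable deficiencies, one per norm element, before Registry publication."""
    errors = [] if errors is None else errors
    if not node.legal_anchor:
        errors.append(f"Node '{node.node_id}': missing sub-normative anchor to the legislative text.")
    if not node.coverage_class:
        errors.append(f"Node '{node.node_id}': missing Coverage Map classification (model assignment).")
    if node.coverage_class and not node.required_certification:
        errors.append(f"Node '{node.node_id}': no agent certification requirement recorded.")
    for child in node.children:
        validate_tree(child, errors)
    return errors
```

The error messages in the sketch illustrate the accessibility requirement discussed below: each identifies a specific deficiency in a specific norm element, in terms a legal professional rather than a software engineer can act on.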

The Class A implementation is the OLRF component with the broadest potential adoption, because it can be deployed alongside existing automated governance systems without requiring their replacement. An authority that implements Class A for a specific norm (publishing a conformant Decision Tree in the Registry alongside whatever software currently applies that norm) has taken the most constitutionally significant single step available to it: it has made the normative basis of its automated decisions publicly verifiable. The Class A implementation is therefore designed for accessibility as much as for technical completeness: the authoring tool must be usable by legal professionals who are not software engineers, the validation service must produce human-readable error messages that identify specific deficiencies in specific norm elements, and the Registry server must be deployable on standard public sector infrastructure without specialist operations expertise3.

The Decision Tree authoring tool deserves particular attention because it is the implementation component that most directly determines whether Class A adoption is achievable at scale. The sub-normative linkage process (tracing every element of the Decision Tree to the specific sentence, clause, or sub-clause of the legislative text from which it derives) is intellectually demanding work that requires deep familiarity with both the applicable norm and the OLRF’s anchor type system. The authoring workflow described in Chapter 17 addresses this by separating the legal analysis (performed in the Rulemap Builder, producing a RUML specification) from the constitutional publication (performed in the OLRF authoring tool, adding sub-normative anchors, Coverage Map classifications, and test cases). This separation ensures that the legal professional does not need to learn the OLRF specification format in order to perform the legal analysis, and that the OLRF publication process can build on the structural decomposition that RUML already provides4.

The Class B Reference Implementation adds the evaluation engine: the normative evaluation service that receives fact inputs through the MCP interface, evaluates them against the Registry-published Decision Tree, and returns a signed Decision Package. The evaluation engine is the OLRF component with the highest constitutional stakes: it is the system that applies the law to facts, and its correctness (its fidelity to the Decision Tree’s normative specification) determines whether every citizen determination it produces is legally defensible. The Class B reference implementation therefore prioritises correctness over performance: it is built to be formally verifiable against the Decision Tree specification, not optimised for throughput. Performance optimisation is the domain of commercial implementations; the reference implementation’s function is to establish the normative baseline against which commercial implementations’ conformance is assessed.
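The property that the Class B reference implementation exists to establish can be sketched compactly: the walk from the root of a Decision Tree to an outcome is deterministic, fully traced, and signed so that it can later be re-derived and verified. The table-form tree, the package fields, and the HMAC signature below are assumptions made for the sketch; the specification defines the actual Decision Package format and signature scheme.

```python
# Illustrative sketch: the Decision Package fields and the signature scheme are assumptions.
import hashlib
import hmac
import json


def evaluate(decision_tree: dict, facts: dict) -> dict:
    """Deterministic walk from root to outcome, recording every step of the normative trace."""
    trace = []
    node_id = decision_tree["root"]
    while True:
        node = decision_tree["nodes"][node_id]
        if "outcome" in node:                       # leaf: the legal conclusion
            return {"outcome": node["outcome"], "trace": trace}
        answer = bool(facts[node["fact"]])          # no inference, no discretion: facts in, branch out
        trace.append({"node": node_id, "anchor": node["anchor"], "fact": node["fact"], "value": answer})
        node_id = node["yes"] if answer else node["no"]


def sign_decision_package(result: dict, facts: dict, tree_version: str, signing_key: bytes) -> dict:
    """Assemble and sign a Decision Package so the determination is later independently verifiable."""
    package = {"tree_version": tree_version, "facts": facts, **result}
    payload = json.dumps(package, sort_keys=True).encode()
    package["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return package
```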

The Class B implementation also includes the complete MCP tool suite described in Chapter 8 (all five tools, implemented to the full specification), comprehensive audit logging, the four-register explanation capability, and the agent certification verification at Control 5. Under Model B, it additionally includes the validation framework that checks Legal Agent subsumption against the Decision Tree, the deviation classification system, and the escalation mechanism for deviations that exceed defined thresholds5.
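The shape of the certification check at Control 5 can be illustrated as follows; the certificate fields and error texts are invented for this sketch, and the authoritative definition of the control is the one given in Chapters 8 and 10.

```python
# Illustrative sketch: certificate fields and the exact check at Control 5 are assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class AgentCertificate:
    agent_id: str
    model: str                  # "A", "B" or "C"
    domains: frozenset          # normative domains the certification covers
    valid_until: date


def verify_at_control_5(cert: AgentCertificate, required_model: str, domain: str,
                        today: date | None = None) -> None:
    """Refuse the submission unless the agent holds a live certification for this model
    and normative domain; uncertified input never reaches the evaluation engine."""
    today = today or date.today()
    if cert.valid_until < today:
        raise PermissionError(f"Agent {cert.agent_id}: certification expired on {cert.valid_until}.")
    if cert.model != required_model:
        raise PermissionError(f"Agent {cert.agent_id}: certified for Model {cert.model}, "
                              f"but this norm element requires Model {required_model}.")
    if domain not in cert.domains:
        raise PermissionError(f"Agent {cert.agent_id}: not certified for domain '{domain}'.")
```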

The Class C Reference Implementation adds the AI fact-finding pipeline, the agent coordination layer, and the full three-model architecture including Model C’s autonomous reasoning with retrospective audit. It is the most complex component of the portfolio and the one whose development most directly tests the OLRF’s architectural claims about the separation principle and the agent certification system.

The Class C implementation is developed in two phases. The first phase implements the MCP fact-finding pipeline (a set of specialised AI agents for document extraction, register querying, and confidence-scored fact assembly) and demonstrates their operation against Class B evaluation infrastructure, including the certification verification at Control 5 for fact-finding agents. The second phase implements the A2A coordination layer, including the Discretion Point escalation and reintegration cycle, the cross-jurisdictional norm evaluation workflow, the composite audit trail generation, and the full Model C architecture with certified autonomous reasoning agents and retrospective audit. The two-phase approach ensures that the fundamental separation principle is demonstrated before the complexity of multi-agent orchestration is added6.
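A first-phase sketch of how a confidence-scored fact might be routed is given below. The DataPoint fields, the threshold value, and the routing messages are assumptions for illustration; the DataPoint Schema and the escalation rules themselves are defined by the specification.

```python
# Illustrative sketch: DataPoint fields, threshold and routing rules are assumptions.
from dataclasses import dataclass


@dataclass
class DataPoint:
    name: str            # fact name defined by the DataPoint Schema, e.g. "monthly_income"
    value: object
    confidence: float    # 0.0-1.0, assigned by the extracting fact-finding agent
    source: str          # provenance, e.g. "payslip-2025-03.pdf, page 1"


def route(datapoint: DataPoint, is_discretion_point: bool, threshold: float = 0.95) -> str:
    """Low-confidence facts and Discretion Points never reach automatic evaluation:
    they are escalated to a human caseworker and reintegrated afterwards."""
    if is_discretion_point:
        return "escalate: Discretion Point reserved for human judgment"
    if datapoint.confidence < threshold:
        return "escalate: confidence below threshold, human verification required"
    return "submit: pass to the evaluation engine over the MCP interface"
```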

Pioneer Implementation Jurisdictions

The reference implementation portfolio is developed in the context of pioneer implementation jurisdictions: public authorities that commit to deploying the OLRF for specific norms in their production environment, contributing real-world validation that laboratory implementations cannot provide. The selection of pioneer jurisdictions is one of the most consequential decisions in the implementation programme, because it determines which legal systems, which administrative traditions, and which norm types provide the initial evidence base for the specification’s adequacy.

The selection criteria balance four considerations. Norm diversity requires that the pioneer portfolio includes norms of different types (social welfare entitlements, tax assessments, regulatory licensing, enforcement decisions), because the OLRF’s architecture must be adequate for the full range of administrative norm application, not merely for the most tractable subset. Jurisdictional diversity requires that the portfolio includes authorities from different EU member states and, where possible, non-EU jurisdictions, because the specification must accommodate different legal traditions, different administrative procedure frameworks, and different constitutional requirements. Institutional scale diversity requires that the portfolio includes both large national authorities (whose norm volumes provide the stress testing that demonstrates performance adequacy) and smaller municipal or regional authorities (whose resource constraints reflect the adoption conditions that the majority of potential adopters will face). Norm complexity diversity requires that the portfolio includes both relatively simple norms (threshold-based benefit entitlements that can be modelled in a few dozen Decision Tree nodes) and highly complex norms (multi-factor licensing assessments with nested exceptions, cross-referenced definitions, and discretionary elements that test every aspect of the OLRF’s architecture)7.

Pioneer implementations enter through the conformance class that matches their institutional readiness. An authority with an existing automated governance system enters through Class A (the Connector Pattern: publishing a conformant Decision Tree alongside its existing system). An authority that is building new infrastructure enters through Class B (implementing the evaluation engine and MCP interface from the outset). An authority with established AI capabilities and the institutional maturity to manage agent certification enters through Class C. No authority is required to begin at Class C. Every authority is encouraged to begin at Class A, because Class A is where the most constitutionally significant improvement (normative transparency) is achieved at the lowest institutional cost.

The pioneer programme provides three forms of support. Technical support includes access to the reference implementation components, integration assistance from the Linux Foundation Europe project team, and direct engagement with the specification authors for gap identification and resolution. Institutional support includes template governance frameworks (processes for Decision Tree lifecycle management, pre-publication review procedures, and draft legal authority instruments) that pioneer authorities can adapt to their jurisdictional context. Financial support includes co-funding for the dedicated digital legislation units that Chapter 16 identifies as a critical institutional prerequisite, recognising that the capacity to author conformant Decision Trees does not exist in most authorities today and must be deliberately built8.

The Commercial Ecosystem

The OLRF is designed not merely to be implemented by public authorities. It is designed to create a commercial ecosystem in which private actors compete to serve the public infrastructure that the OLRF defines. The architecture of that ecosystem matters as much as the architecture of the specification, because the wrong commercial structure can produce dependencies that undermine the sovereignty and democratic accountability that the OLRF is designed to secure.

The commercial ecosystem has four market segments, each with distinct competitive dynamics and distinct sovereignty implications9.

The evaluation engine market is the segment most directly connected to the OLRF’s constitutional core. An evaluation engine is the software that takes a set of validated facts and a Registry-published Decision Tree and produces a signed Decision Package. It is the system that applies the law to the citizen’s case. Its correctness is therefore constitutionally non-negotiable: an evaluation engine that produces an incorrect result has produced an unlawful administrative act.

The evaluation engine market favours specialist excellence over platform scale. Building a correct, conformant, formally verifiable evaluation engine is not a scale play. It is an expertise play, where a team of twenty people with the right legal informatics background can produce a system that outperforms a team of two hundred generalists. The OLRF’s conformance certification framework reinforces this dynamic: by making the quality of evaluation (normative correctness, auditability, formal verifiability) the primary competitive dimension rather than throughput or integration breadth, it creates a market where specialist excellence is the differentiator10.

The sovereign dimension of the evaluation engine market is particularly acute. A public authority that uses a commercially operated evaluation engine is entrusting its normative application function to a system whose source code it does not control and whose continued availability is subject to the vendor’s commercial decisions. The OLRF’s response is layered. The open reference implementation (available under the Linux Foundation Europe project) means that no authority is ever entirely dependent on commercial availability. The conformance certification requirement means that any certified engine can in principle replace any other, because the Decision Trees and the signed Decision Packages they produce are format-standardised rather than vendor-specific. And the explicit encouragement of public sector capacity development means that authorities can, over time, build the internal expertise to operate their own conformant evaluation infrastructure rather than depending permanently on commercial vendors.

The Registry-as-a-Service market is the segment most directly relevant to adoption barriers for smaller authorities. Many public authorities, particularly at municipal and regional level, lack the technical infrastructure and operations expertise to run their own Registry instances. A hosted Registry service removes a significant adoption barrier, but it also creates a concentration risk whose sovereign implications must be addressed in the market’s design rather than left to competition policy.

Three structural requirements are built into the Registry conformance specification to address this risk. Migration portability: a hosted Registry provider must export a complete, cryptographically verifiable copy of a client authority’s entire Registry history on request, in a format that any conformant Registry implementation can import. Jurisdictional data sovereignty: a Registry instance holding the Decision Trees of a public authority must store and process that data within the jurisdiction’s legal data governance framework, regardless of the vendor’s infrastructure geography. Federated query interoperability: any conformant Registry instance must answer cross-Registry queries from any other conformant instance without requiring administrative arrangements between their operators, ensuring that an authority can migrate between Registry providers without disrupting cross-jurisdictional workflows11.
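Expressed as an interface, the first and third requirements might look like the sketch below; the method names are assumptions, and the second requirement, jurisdictional data sovereignty, is a deployment constraint on where the instance runs rather than a method it exposes.

```python
# Illustrative sketch: method names are assumptions, not the Registry conformance specification.
from typing import Protocol


class ConformantRegistry(Protocol):
    def export_history(self, authority_id: str) -> bytes:
        """Migration portability: a complete, cryptographically verifiable dump of the
        authority's full Registry history, importable by any other conformant instance."""
        ...

    def import_history(self, dump: bytes) -> None:
        """Counterpart of export_history: accept a dump produced by any conformant instance."""
        ...

    def federated_query(self, norm_id: str, version: str | None = None) -> dict:
        """Federated interoperability: answer cross-Registry queries from any other
        conformant instance without bilateral administrative arrangements."""
        ...
```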

The AI agent market encompasses three tiers that correspond to the three models. Under Model A, the market provides fact-finding agents: AI systems that extract data from documents, query registers, assemble facts into the DataPoint Schema, and submit them for deterministic evaluation. Under Model B, the market provides Legal Agents: AI systems that perform subsumption, validated against the Decision Tree, with domain-specific Model B certification (Chapter 10). Under Model C, the market provides autonomous reasoning agents: AI systems that reason directly from statutory text, with domain-specific Model C certification and retrospective audit.

The architectural separation principle is the primary structural defence against market consolidation in this segment. Because the MCP interface makes the normative evaluation layer completely independent of the AI agent layer (a fact-finding agent’s only relationship to the normative layer is through the typed, schema-validated DataPoint submission described in Chapter 8), the AI agent market is structurally modular. A hyperscaler that dominates the AI fact-finding market for unemployment benefit applications cannot, through that dominance, gain any foothold in the normative evaluation of those applications. The Decision Tree evaluates the facts submitted by whoever submitted them; it does not care whether those facts were assembled by a hyperscaler agent or a start-up agent, as long as they conform to the DataPoint Schema and the agent holds the required certification. The normative layer is permanently and architecturally sovereign, regardless of who wins the competition in the agent layer above it12.
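The vendor-blindness of that gateway can be shown in miniature. In the sketch below, which uses an invented schema format, the admission check inspects only schema conformance and certification; nothing about the submitting vendor enters the decision.

```python
# Illustrative sketch: the schema format is an assumption; the point is that admission
# depends only on schema conformance and certification, never on the submitting vendor.
def accept_submission(datapoints: list[dict], schema: dict, certified: bool) -> bool:
    if not certified:
        return False
    for dp in datapoints:
        expected_type = schema.get(dp.get("name"))
        if expected_type is None or not isinstance(dp.get("value"), expected_type):
            return False          # unknown fact name or wrong type: rejected, whoever sent it
    return True


# A hyperscaler's and a start-up's submissions pass or fail by exactly the same test.
schema = {"monthly_income": (int, float), "resident_in_jurisdiction": bool}
print(accept_submission([{"name": "monthly_income", "value": 2100}], schema, certified=True))  # True
print(accept_submission([{"name": "vendor_priority", "value": 1}], schema, certified=True))    # False
```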

This separation creates genuine market space for specialised start-ups and innovators. A start-up that develops superior document extraction capabilities for a specific document type (medical certificates for disability benefit applications, for example, or building permits for planning enforcement) can compete on that specific capability without needing to provide a complete end-to-end automated governance platform. The MCP interface’s modular tool architecture means that a specialised fact-finding agent for one document type can coexist with a different specialised agent for another document type within the same Class C workflow, without requiring the authority to commit to a single vendor’s end-to-end solution. This modularity is the commercial form of the sovereignty argument: it keeps the market permanently open for specialised excellence13.

The agent certification system (Chapter 10) adds a further dimension to the AI agent market. Certification is both a quality assurance mechanism and a market access credential. An agent that holds an OLRF certification for a specific normative domain under a specific model can operate in any jurisdiction that publishes OLRF-conformant Decision Trees, subject to the Coverage Map’s certification requirements. The certification creates a portable qualification that reduces the procurement friction for deploying agents across authorities and jurisdictions. For commercial agent providers, certification is an investment in market access. For open-source agent developers, certification is a credibility signal that enables public sector deployment without the brand recognition that established vendors enjoy. The certification market must therefore remain open to both: public test suites, equal assessment standards, and graduated fee structures that do not exclude smaller actors (Chapter 10).

The orchestration and integration market provides the workflow infrastructure for complex administrative processes that span multiple norms, multiple agents, and multiple jurisdictions. Orchestration at scale does benefit from the infrastructure advantages that large providers have: the reliability engineering, the global network infrastructure, and the operational expertise required to run legally consequential workflows at high volume are real barriers to entry for small providers. The OLRF’s response is not to pretend these barriers do not exist, but to ensure that the orchestration layer’s conformance requirements prevent market concentration from translating into normative layer control. An orchestration provider that handles workflows across multiple jurisdictions queries each jurisdiction’s Registry independently through the open federation protocol, submits facts to each jurisdiction’s evaluation engine through the MCP interface, and assembles Decision Packages from each evaluation without exercising any discretion over their normative content. Market concentration in orchestration is a competition policy problem; it is not, under the OLRF architecture, a sovereignty problem14.
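A sketch of that division of labour, with invented client objects standing in for the federation protocol and the MCP interface, makes the constraint visible: the orchestrator routes and collects, and nothing more.

```python
# Illustrative sketch: the registry and engine client objects and their methods are assumptions.
def orchestrate(norm_id: str, facts: dict, registries: dict, engines: dict) -> list[dict]:
    """Cross-jurisdictional workflow: fetch each jurisdiction's Decision Tree, submit the
    facts, and collect the signed Decision Packages verbatim."""
    packages = []
    for jurisdiction, registry in registries.items():
        tree = registry.federated_query(norm_id=norm_id)                    # open federation protocol
        package = engines[jurisdiction].evaluate(tree=tree, facts=facts)    # MCP submission
        packages.append(package)                                            # assembled, never altered
    return packages
```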

Coalition Architecture and Actor Groups

The OLRF’s path to adoption depends on a coalition of five actor groups whose interests, incentives, and institutional roles are distinct. A communication strategy that addresses all five with the same message will persuade none of them effectively. This chapter therefore describes a coalition-building strategy that sequences engagement so that each group’s adoption reduces the barriers for the next15.

Pioneering public authorities are the coalition’s foundation. Without at least a small number of public authorities willing to implement the OLRF for specific norms in their production environments, the reference implementation programme has no operational validation, the conformance class architecture has no demonstrated cases, and every subsequent adopter faces the uncertainty of being among the first rather than the confidence of following established practice.

The interests that motivate pioneering authority participation are primarily institutional rather than political. An authority operating automated governance systems at scale faces growing exposure (from administrative courts, from data protection authorities, from parliamentary oversight) to demands for normative transparency that it currently cannot satisfy. Class A adoption is the least costly response to this exposure that genuinely satisfies its requirements: publishing a conformant Decision Tree costs less, institutionally, than defending against a judicial challenge that the authority cannot adequately answer because it cannot produce a legally adequate account of its automated system’s normative basis. The OLRF’s engagement with pioneering authorities therefore begins not with the architecture’s constitutional vision but with its immediate institutional value: it reduces the authority’s accountability exposure in a way that its existing systems do not16.

Administrative courts and constitutional review bodies are the coalition’s most powerful indirect actors: the institutions whose jurisprudence creates the legal environment in which the OLRF’s value proposition is most compelling. An administrative court that rules that an automated determination without a structured normative record does not constitute a legally adequate statement of reasons has created, in a single judgment, a compliance requirement for every automated governance system in its jurisdiction. A constitutional court that finds that a proprietary implementation of a legislative norm cannot be subjected to adequate judicial review has created a sovereignty argument whose political force exceeds anything that a white paper can generate.

The OLRF’s engagement with courts is not advocacy. It is education. Judicial seminars on the constitutional dimensions of automated governance, academic publications by administrative law scholars, and participation in cases that raise the accountability questions the OLRF addresses are the instruments through which the OLRF’s constitutional framework enters the judicial conversation. The goal is not to persuade courts to rule in ways that favour the OLRF. It is to ensure that when courts examine the accountability architecture of automated governance systems, they have access to the analytical framework that this paper has developed17.

National digital sovereignty programmes are the coalition’s political infrastructure. The ministries, agencies, and inter-ministerial initiatives across EU member states that are building the European digital sovereignty agenda have the budgets, the legislative access, and the inter-governmental coordination mechanisms that adoption at scale requires. The OLRF’s engagement with these programmes positions it as a component of the sovereignty programme they are already building: as the legal infrastructure layer of the European digital sovereignty agenda, rather than as a separate initiative that competes for political attention and budget18.

Civil society organisations and affected citizen communities are the coalition’s democratic legitimacy base. The OLRF’s constitutional argument is ultimately an argument about citizens’ rights: the right to know why a public authority’s automated system decided against them, the right to challenge that decision in a court capable of genuinely reviewing it, and the right to live under a governance system whose machine-executable legal specifications are publicly accountable to democratic institutions. These are the daily concerns of the organisations that represent benefit claimants, tax debtors, licence applicants, and enforcement subjects across Europe.

Civil society engagement serves two functions in the coalition architecture. The first is democratic legitimacy: the OLRF’s adoption by public authorities will face less political resistance if it is supported by the organisations that represent the people those authorities serve. The second is quality assurance: civil society organisations are the actors most likely to identify gaps, omissions, and biases in published Decision Trees and Coverage Maps, because they work daily with the people whose cases reveal the system’s deficiencies. The pre-publication review period and the civil society scrutiny function (both Chapter 15) are designed to give these organisations a formal, meaningful role in the OLRF’s governance, not merely a consultative one19.

The existing Law-as-Code and automated administration community is the coalition’s technical base. The projects described in Chapter 17 (OpenFisca, RegelSpraak, RegelRecht, Catala, Blawx, PolicyEngine, Rulemapping, and the growing number of agentic administration projects across European public administrations) are not competitors to be displaced. They are partners to be connected. Their domain expertise, their production experience, their institutional relationships, and their established communities are assets that the OLRF cannot replicate and should not try to. The three docking patterns (Connector, Validator, Certifier) are the OLRF’s offer to this community: connect your existing work to a shared normative infrastructure, and gain public promulgation, constitutional anchoring, cross-jurisdictional visibility, and formal agent qualification in return. The coalition succeeds when the existing community sees the OLRF not as a rival standard but as the constitutional layer that makes their technical work democratically accountable20.

Political Tailwind

The distance between the current governance environment and the OLRF’s normalisation phase is substantial. It will not be covered quickly, uniformly, or without significant institutional friction. But three forces in the current political environment make the path more tractable than it might otherwise appear.

The regulatory environment is creating compliance costs for public authorities that the OLRF directly reduces. The EU AI Act’s requirements for transparency, human oversight, accuracy, and risk management in high-risk AI systems apply to every automated governance system that affects individual rights. The Interoperable Europe Act’s requirements for digital interoperability in the public sector apply to every cross-border administrative process. National administrative procedure law reforms (triggered by the growing deployment of automated administrative decision-making) are creating new documentation and accountability requirements across jurisdictions. The OLRF does not add to these compliance burdens. It satisfies them by construction: a Class B implementation that produces signed Decision Packages, publishes Coverage Maps, operates certified agents, and preserves a complete audit trail is a system that meets the AI Act’s requirements for transparency, human oversight, and documentation without needing a separate compliance layer21.

The judicial environment is creating liability exposure for authorities whose automated systems cannot demonstrate normative correctness. Administrative courts across Europe are increasingly willing to scrutinise the normative basis of automated decisions. The Dutch SyRI judgment, the ongoing consequences of the Toeslagenaffaire, and the growing body of German administrative law scholarship on the reviewability of automated administrative acts (§35a VwVfG) are creating a judicial environment in which an authority that cannot produce a structured, verifiable account of how its automated system applies the law faces increasing legal risk. The OLRF’s Decision Package is the most complete answer to that risk that the current state of the art can provide22.

The competitive environment is creating a sovereignty urgency that gives digital sovereignty arguments a political salience they have not previously enjoyed. The entry of major technology platforms into the European public services market, the concentration of foundation model provision among a small number of non-European actors, and the growing awareness that AI-mediated public administration creates dependencies that are qualitatively different from conventional IT procurement are generating political demand for sovereign alternatives. The OLRF positions itself as the constitutional infrastructure for that sovereignty: not a rejection of commercial participation, but a framework that structures commercial participation so that it strengthens the commons rather than enclosing it23.

These three forces (regulatory compliance pressure, judicial accountability pressure, and sovereignty pressure) are the political tailwinds on which the OLRF’s adoption coalition must be built. They are not generated by the OLRF. They exist independently of it. The OLRF’s strategic contribution is to provide the architectural answer to problems that these forces have already made visible. The adoption path succeeds when the OLRF is understood not as a new initiative seeking political support, but as the technical and constitutional solution to problems that democratic institutions have already identified and that they are already, with increasing urgency, demanding that public administration solve.

Conclusion: The Path Can Be Walked

This chapter, and Part IV as a whole, has addressed the question that every serious proposal for institutional change must eventually face: not whether the vision is worth pursuing, but whether the path to it can be walked.

The answer is structured around four convictions. First, that a graduated adoption model (the three conformance classes) makes the path accessible to authorities at every level of institutional readiness, because Class A is achievable with existing systems and delivers the most constitutionally significant improvement (normative transparency) at the lowest cost. Second, that connecting the existing ecosystem (the three docking patterns) is strategically superior to replacing it, because it leverages decades of domain expertise and production experience rather than demanding that communities start over. Third, that a layered standardisation strategy (six organisations, each governing the component best suited to its expertise) provides the international legitimacy that sovereign legal infrastructure requires without accepting the slowest organisation’s timeline for every component. And fourth, that a structured commercial ecosystem (evaluation engines, Registry services, certified agents, orchestration) can produce the sovereign infrastructure that democratic governance demands, provided that the rules of competition are set by the architecture’s constitutional commitments rather than by market power.

The reference implementation programme, the pioneer jurisdiction support structure, the commercial ecosystem design, and the coalition architecture together constitute the operational translation of the OLRF’s constitutional vision into institutional reality. They do not guarantee success. No institutional strategy can. What they do is make the path visible, the sequence clear, the prerequisites identifiable, and the progress measurable.

The OLRF began with a constitutional observation: that the growing automation of public administration threatens to hollow out the democratic accountability on which legitimate governance depends. It responded with an architectural proposal: a framework in which machine-executable law remains transparent, reviewable, contestable, and subject to democratic control. Part IV has addressed the final question: how that framework can move from proposal to infrastructure. The answer is not a single bold stroke. It is a structured, graduated, coalition-based path that begins with the simplest meaningful step (publishing a Decision Tree in the Registry) and deepens as institutional capacity, commercial ecosystem, and political commitment develop.

That path can be walked. The question is no longer architectural. It is political: whether the democratic institutions that are already demanding accountability from automated governance will invest in the infrastructure that makes accountability possible.

Footnotes

  1. The gap between specification and implementation is well documented in software engineering. Brooks, F., The Mythical Man-Month: Essays on Software Engineering, Addison-Wesley 1975 (anniversary edition 1995), pp. 142 ff., observes that the second system is the most dangerous: the designers, having learned from the first, produce a specification that is more ambitious than any implementation can fulfil. The OLRF’s reference implementation programme is designed to close this gap before the specification is finalised, not after.

  2. The three-constituency model for reference implementations (public authorities, commercial implementers, standardisation process) follows the pattern established by W3C’s Web standards development, where reference implementations serve simultaneously as conformance baselines for implementers, as demonstration tools for adopters, and as completeness evidence for the standards process. See: W3C, “W3C Process Document”, Section 6.2 (Implementation Experience), 2023.

  3. The accessibility requirement for the Class A implementation reflects the principle that the barrier to constitutional compliance should be institutional readiness rather than technical capability. An authority that has the legal expertise to perform the sub-normative analysis should not be blocked from Class A adoption by the absence of specialist software engineering capacity. The authoring tool is designed to make this principle operational.

  4. The integration of the RUML translation layer as the analytical front-end for the OLRF authoring tool is described in Chapter 17, fn. 7. For the Rulemap Builder: Rulemapping Group, https://rulemapping.org/ (beta, visual no-code interface for constructing Rulemaps that can be translated into OLRF Decision Trees).

  5. The correctness-first design philosophy for the Class B reference implementation follows the principle that reference implementations establish the normative baseline, not the performance baseline. See: Lamport, L., “Proving the Correctness of Multiprocess Programs”, IEEE Transactions on Software Engineering, Vol. SE-3, No. 2, 1977, pp. 125 ff. (arguing that correctness must be established before optimisation, because optimisation of an incorrect system merely produces incorrect results faster).

  6. The two-phase development of the Class C reference implementation ensures that the separation principle (the most fundamental architectural commitment of the OLRF) is demonstrated independently of the multi-agent orchestration complexity. If the separation principle fails (if an AI agent can, through any path, modify a Decision Tree, bypass the evaluation engine, or produce a valid Decision Package without certification verification), the entire constitutional argument collapses. Testing this principle in isolation, before adding orchestration complexity, is therefore a constitutional necessity rather than a development convenience.

  7. The selection criteria for pioneer implementations follow the diversity requirements established for clinical trials in pharmaceutical regulation: a trial population that is too homogeneous produces results that cannot be generalised to the broader population. See: ICH, “Guideline E5: Ethnic Factors in the Acceptability of Foreign Clinical Data”, 1998 (requiring that clinical trials include populations representative of the intended deployment context). The OLRF applies the same principle to normative infrastructure: pioneer implementations that are too homogeneous (only tax norms, only large national authorities, only one jurisdiction) produce evidence that cannot support claims of general adequacy.

  8. The co-funding model for digital legislation units reflects the recognition that institutional capacity, not technical infrastructure, is the binding constraint on OLRF adoption. An authority that receives a free Registry server but lacks the legal expertise to author a conformant Decision Tree cannot achieve even Class A. The pioneer support programme therefore prioritises capacity investment over infrastructure provision.

  9. The four-segment market structure (evaluation engines, Registry services, AI agents, orchestration) reflects the OLRF’s architectural layering: each market segment corresponds to a distinct architectural layer, and the separation between layers prevents market power in one segment from translating into control over another. This structural defence is deliberate. It follows the principle established in telecommunications regulation that infrastructure markets must be structured to prevent vertical integration from undermining competitive dynamics. See: Regulation (EU) 2018/1972 (European Electronic Communications Code), recitals 3 ff.

  10. The proposition that the evaluation engine market favours specialist excellence over platform scale is an empirical claim that the pioneer implementations will test. If it proves correct, the OLRF’s commercial ecosystem will be characterised by a diverse population of specialist evaluation engine providers, each serving a cluster of normative domains. If it proves incorrect (if evaluation engine development turns out to require platform-scale investment), the open reference implementation provides the sovereign fallback: no authority is ever dependent on commercial availability.

  11. The three structural requirements for the Registry-as-a-Service market (migration portability, jurisdictional data sovereignty, federated query interoperability) are designed to prevent the Registry market from replicating the vendor lock-in dynamics that characterise the current cloud computing market. See: European Commission, “Switching Cloud Service Providers”, Staff Working Document, SWD(2023) 233 final.

  12. The architectural separation between the AI agent layer and the normative evaluation layer is the OLRF’s most important commercial design feature, because it ensures that the AI market (where hyperscaler advantages are greatest) cannot capture the normative market (where sovereignty requirements are strictest). This separation does not prevent hyperscalers from participating in the AI agent market. It prevents their participation from producing normative layer dependencies.

  13. The modularity of the MCP interface’s tool architecture means that the AI agent market can support specialisation at an unusually granular level. A start-up that excels at extracting income data from payslips competes with a different start-up that excels at extracting medical data from clinical reports, and both can coexist within the same Class C workflow. This granularity is unusual in the current AI market, where end-to-end platform solutions are the dominant competitive model. See: Iansiti, M. and Lakhani, K., “Competing in the Age of AI”, Harvard Business Review, January-February 2020, pp. 60 ff.

  14. The distinction between a competition policy problem (market concentration in orchestration) and a sovereignty problem (commercial control over the normative layer) is constitutionally important. Competition policy problems can be addressed through existing regulatory instruments (Art. 102 TFEU, Digital Markets Act). Sovereignty problems require architectural solutions, because no amount of competition enforcement can restore sovereignty over a normative layer that is architecturally accessible to commercial actors. The OLRF ensures that sovereignty problems are solved architecturally, leaving competition problems to competition policy.

  15. The five-actor coalition model follows the analytical framework of: Kingdon, J., Agendas, Alternatives, and Public Policies, 2nd edn., Pearson 2003, pp. 165 ff. (the “policy window” model, in which adoption occurs when problem recognition, policy solutions, and political conditions align). The OLRF’s coalition strategy is designed to align these three streams: courts and civil society generate problem recognition, the OLRF provides the policy solution, and sovereignty programmes and pioneering authorities provide the political conditions.

  16. The proposition that Class A adoption is the least costly response to growing accountability exposure reflects a rational-institutional analysis: authorities adopt the OLRF not because they are persuaded by its constitutional vision, but because the alternative (defending against judicial challenges without a structured normative record) is more costly. This is a feature, not a limitation, of the adoption strategy: adoption driven by institutional self-interest is more durable than adoption driven by political enthusiasm, because the incentive structure persists regardless of changes in political leadership.

  17. The educational engagement with courts follows the model established by the German Richterakademie and the European Judicial Training Network (EJTN), which provide structured continuing education for judges on topics of legal development. The constitutional dimensions of automated governance (the reviewability of algorithmic decision-making, the adequacy of AI-generated reasons, the scope of §35a VwVfG) are topics that judicial education programmes are already beginning to address. See: EJTN, “Judicial Training Principles”, 2016, Principle 10 (training on new areas of law and legal development).

  18. The positioning of the OLRF as a component of existing sovereignty programmes, rather than as a competing initiative, is strategically important because digital sovereignty budgets are already allocated to cloud infrastructure (Gaia-X), digital identity (EUDI Wallet), semiconductor production (European Chips Act), and AI capability (various national AI strategies). The OLRF must demonstrate that it serves the same strategic objective (sovereign digital infrastructure) in a domain (legal infrastructure) that the existing programmes have not yet addressed. The argument of Chapter 14 provides the constitutional basis for this positioning.

  19. The formal role of civil society in the OLRF’s governance reflects the principle that democratic infrastructure must be democratically governed. See: Fung, A. and Wright, E. O., “Deepening Democracy: Innovations in Empowered Participatory Governance”, Politics and Society, Vol. 29, No. 1, 2001, pp. 5 ff. (arguing that participatory governance mechanisms are most effective when they give affected communities formal decision-making power rather than merely consultative access).

  20. The proposition that existing Law-as-Code projects are partners rather than competitors is the strategic core of Chapter 17. The success of the coalition depends on the existing community’s perception of the OLRF: if it is perceived as a rival standard that demands replacement, adoption will be resisted. If it is perceived as a constitutional layer that adds value to existing work, adoption will be welcomed. The three docking patterns are designed to make the second perception the correct one.

  21. The compliance-by-construction argument is the OLRF’s strongest institutional selling point: the architecture was designed, from the outset, around the constitutional principles that the AI Act and the Interoperable Europe Act codify. A system built to these principles does not need a separate compliance layer. See: Chapter 13 (the convergence thesis).

  22. Rechtbank Den Haag, C/09/550982, ECLI:NL:RBDHA:2020:1878 (SyRI judgment); Parlementaire ondervragingscommissie Kinderopvangtoeslag, “Ongekend Onrecht”, Tweede Kamer 2020; Braun Binder, N., “Vollautomatisierte Verwaltungsverfahren, Verwaltungsakte und gerichtliche Kontrolle”, NVwZ 2019, S. 1321 ff.

  23. Stanford HAI, “Artificial Intelligence Index Report 2025”, Chapter 1 (industry concentration); European Commission, “2024 Report on the State of the Digital Decade”, COM(2024) 260 final, pp. 28 ff. For the sovereignty argument specifically: Nemitz, P., “Constitutional Democracy and Technology in the Age of Artificial Intelligence”, Philosophical Transactions of the Royal Society A, Vol. 376, No. 2133, 2018, pp. 1 ff.