OLRF

The Rules That Run the World

Introduction

“When there are disputes among persons, we can simply say: Let us calculate --- Calculemus --- without further ado, to see who is right.”
Gottfried Wilhelm Leibniz, The Art of Discovery, 1685

There is a question that no constitution yet answers, that no legislature has yet been asked to resolve, and that no court has yet been required to adjudicate --- but that will define the relationship between citizens and their governments for centuries to come:

When a machine makes a decision that changes a life, whose law is it acting on?

Not whose software. Not whose server. Whose law. This paper is about finding answers to that question.

The World That Is Already Here

We tend to speak of artificial intelligence and autonomous systems as if they were arriving. They have arrived. Across Europe and beyond, algorithms already determine which benefit applications receive priority review, which buildings get permits, which tax returns are flagged for audit, which risk scores follow individuals through social systems. This is not a failure of governance. Much of it is legitimate, necessary, and even desirable. Modern states administer tens of millions of individual decisions each year. Without automation, they cannot function, and in not functioning they lose the trust on which democratic government depends.

The question is not whether machines will be involved in governance. They already are. The question is whether they will be governed themselves: whether the rules they apply are the rules that were actually enacted, whether the decisions they produce are the decisions the law intends, and whether the citizens affected can understand, challenge, and ultimately trust what happened to them.

Today, the honest answer to all three questions is: often not. Laws are published as narrative texts. They are then re-interpreted, re-implemented, and re-encoded, independently, by hundreds of different software systems across dozens of jurisdictions, each time with subtle differences, each time in ways that are opaque to the people they affect and to the courts that are supposed to review them. The same norm produces different outcomes depending on which system applies it. The same citizen, in the same situation, receives different treatment depending on which municipality processes their case. Not because the law is different. Because the implementations are.[1]

This is the governance crisis of the digital age. It is not primarily a crisis of bad actors or malicious algorithms. It is a structural crisis: the gap between law as it is written and law as it is executed has grown so wide that the rule of law itself is at risk, not by dramatic rupture, but by a slow, distributed, invisible erosion.

The Agent Revolution

The crisis described above involves software systems that process information and produce outputs within defined workflows, with humans nominally available for review. What is now emerging is something categorically different. It is not a faster version of the same thing. It is a new kind of thing entirely, and the constitutional order has not yet begun to reckon with what it means.

An AI agent is not a faster form of software. It is a different kind of actor. It maintains state across extended timeframes. It pursues goals rather than executing instructions. It can invoke other agents, query external systems, draft documents, send communications, and commit to actions, all without a human decision at each step. When multiple agents are deployed together in a coordinated system, the emergent complexity of their interactions exceeds what any individual engineer designed, and frequently exceeds what any individual engineer can reconstruct after the fact.[2]

But the constitutional significance of agents does not lie primarily in their technical capabilities. It lies in the functions they are beginning to exercise. Consider what is already happening, and what the deployment trajectories of the next three to five years will make routine.

Agents are beginning to perform subsumption. They classify facts under legal concepts: this income falls under this tax bracket, this building violates this zoning provision, this applicant satisfies these eligibility criteria. Subsumption is the core skill of legal professional training. It is the operation for which the state requires the Befähigung zum Richteramt before a human is permitted to perform it in a legally consequential context. Agents are performing this function today, in pilot projects across European public administrations, without any qualification requirement at all.

Agents are beginning to coordinate autonomously. A citizen reports a change of circumstance, and an agent ecosystem propagates the legal consequences across multiple administrative systems: updating the tax register, recalculating benefit entitlements, notifying the health insurer, adjusting the pension contribution, triggering a housing allowance reassessment. Each step is taken by a specialised agent that queries a different register, applies a different norm, and produces a partial outcome that feeds into the next agent’s input. No human is involved in any individual step. The citizen receives a single consolidated notification. If anything in the chain was wrong, identifying which agent, at which step, applying which version of which rule, produced the error will be an investigative challenge of the first order. Without the architecture this paper proposes, it will frequently be an unsolvable one.
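
The coordination problem can be made concrete with a small sketch. The function below chains specialised agents and records an audit entry at every step; the agent names, rule versions, and the €563 benefit figure are purely illustrative assumptions, not part of any real system. Without such a trail, the question "which agent, applying which rule version, produced the error?" has no answer; with it, the answer is a lookup.

```python
def run_pipeline(event: dict, steps) -> tuple[dict, list[dict]]:
    """Propagate a reported change of circumstance through a chain of
    specialised agents, recording an audit entry after every step.

    Hypothetical sketch: each step is a (agent_name, rule_version, fn) triple,
    where fn is the agent's deterministic stand-in.
    """
    trail = []
    state = dict(event)
    for agent_name, rule_version, fn in steps:
        state = fn(state)
        # snapshot the state so a reviewer can pinpoint the step that went wrong
        trail.append({"agent": agent_name,
                      "rule_version": rule_version,
                      "state_after": dict(state)})
    return state, trail

# Illustrative two-step cascade (names, versions, and amounts are invented):
steps = [
    ("tax-register-agent", "EStG/1.4.0",
     lambda s: {**s, "tax_class": 2}),
    ("benefits-agent", "SGB-II/2.1.0",
     lambda s: {**s, "benefit_eur": 563 if s["tax_class"] == 2 else 0}),
]
final_state, trail = run_pipeline({"event": "change_of_circumstance"}, steps)
```

Each trail entry binds an outcome to a named agent and a specific rule version, which is precisely the reconstruction the paragraph above describes as otherwise unsolvable.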

Agents are beginning to become the actors the citizen encounters. The social worker, the tax clerk, the benefits officer: these are the human faces of the state that citizens have dealt with for centuries. They are being supplemented, and in high-volume processes increasingly replaced, by agents that the citizen never sees, whose reasoning the citizen cannot inspect, and whose qualification no one has assessed. The citizen’s relationship with the state is being mediated by a new class of actors that exercises normative functions with no formal accountability framework, no qualification requirement, and no constitutional status.

This is not a future risk. It is a present condition that is deepening with every procurement decision, every pilot project, and every ministry that deploys an LLM-based system to “assist” its officials in processing applications. The assistance becomes reliance. The reliance becomes dependence. The dependence becomes, in operational terms, delegation of normative authority to a system that no constitutional framework governs.[3]

The constitutional consequence is direct. If the state requires formal qualification for a human who subsumes facts under a statutory provision, on what constitutional basis does it permit a machine to perform the same function without any qualification requirement at all? If the state insists that every exercise of public authority be traceable through an unbroken chain of accountability to the sovereign people, how is that chain maintained when the actors exercising authority are AI agents whose reasoning is opaque, whose coordination is emergent, and whose qualification has never been assessed?

These are not philosophical questions. They are design questions. They have architectural answers. The architecture that provides those answers is the subject of this paper.

The State That Could Exist

Before describing the architecture, it is worth pausing to see where it leads. Not what the OLRF does technically (that is the subject of Parts I through III), but what the state looks like when the architecture is in place. The examples that follow are not utopian projections. They are operational consequences of design choices that this paper specifies. Each one becomes possible when law exists not merely as text to be interpreted, but as infrastructure to be executed.

Law That Unfolds Itself

The parliament amends a statute. Within hours, the updated Decision Tree is published in the Registry, signed by the responsible authority, linked sub-normatively to the amended text. Every administrative system that references that tree applies the new version from the moment it enters into force. The gap between legislation and implementation, which today spans months or years (and which produces, in the interim, millions of determinations based on superseded law), collapses to hours. No citizen receives a decision based on an outdated implementation, because there are no outdated implementations. Every implementation references the Registry, and the Registry is always current.[4]
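
The mechanism that makes "no outdated implementations" possible is version resolution against the Registry. The sketch below shows one way such resolution could work; the schema, field names, and the norm identifier are assumptions for illustration, not part of any published OLRF specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TreeVersion:
    """One published version of a Decision Tree (hypothetical Registry schema)."""
    norm_id: str          # identifier of the underlying norm, e.g. "DE/WoGG/12" (invented)
    version: str          # semantic version of the machine-executable form
    in_force_from: date   # date the amendment enters into force
    signed_by: str        # responsible authority that signed the publication

def version_in_force(versions: list[TreeVersion], on: date) -> TreeVersion:
    """Return the version applicable on a given date: the latest already in force.

    Because every conformant system resolves against the Registry this way,
    there is no such thing as a stale local copy of the law.
    """
    applicable = [v for v in versions if v.in_force_from <= on]
    if not applicable:
        raise LookupError("no version of this tree was in force on that date")
    return max(applicable, key=lambda v: v.in_force_from)

# Illustrative publication history for a housing allowance provision:
history = [
    TreeVersion("DE/WoGG/12", "1.0.0", date(2023, 1, 1), "responsible authority"),
    TreeVersion("DE/WoGG/12", "2.0.0", date(2025, 1, 1), "responsible authority"),
]
```

A decision dated June 2024 resolves to version 1.0.0; one dated January 2025 resolves to 2.0.0. The resolution is a property of the query, not of any locally installed implementation.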

The Citizen as Principal, Not as Petitioner

A citizen has a personal Legal Agent, certified under the OLRF’s agent certification system (Chapter 10), that monitors the Registry on the citizen’s behalf. When a statutory change affects the citizen’s situation, the agent informs the citizen proactively: “The amendment to §X means you now qualify for Y. Shall I submit the application?” The citizen decides. The agent assembles the facts, submits them to the evaluation engine, and the citizen receives a signed Decision Package documenting exactly which norm was applied, which facts were decisive, and which exceptions were considered. The information asymmetry between state and citizen, which today causes millions of valid claims to go unexercised because the citizen did not know they existed, is structurally dissolved. The citizen is no longer a supplicant navigating an opaque bureaucracy. The citizen is a principal, served by a qualified agent, operating on a transparent normative infrastructure.[5]
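
What a Decision Package might contain can be sketched as a data structure. The field names below are illustrative assumptions, and a content digest stands in for the cryptographic signature a real deployment would attach.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_decision_package(norm_id: str, tree_version: str,
                           decisive_facts: dict,
                           exceptions_considered: list[str],
                           outcome: str) -> dict:
    """Assemble a Decision Package: the self-describing record the citizen
    receives. Hypothetical structure; a SHA-256 digest over the canonical
    JSON form stands in for the authority's signature."""
    package = {
        "norm_id": norm_id,
        "tree_version": tree_version,            # exactly which published tree was applied
        "decisive_facts": decisive_facts,        # the facts that determined the outcome
        "exceptions_considered": exceptions_considered,
        "outcome": outcome,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    canonical = json.dumps(package, sort_keys=True).encode()
    package["digest"] = hashlib.sha256(canonical).hexdigest()
    return package
```

Because the package names the norm, the tree version, and the decisive facts, the citizen (or a court) can re-evaluate the same facts against the same published version and check that the outcome follows.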

Legislation That Sees What It Does

The parliament considers a reform of the housing allowance. Before the vote, the simulation runs: the proposed Decision Tree is evaluated against an anonymised population of cases drawn from the current operational record. The parliament sees, before it decides, how many households are affected, which income bands gain or lose, where exceptions apply and where they do not, and what the fiscal impact will be. After enactment, the aggregated Decision Packages provide real-time feedback: are the exceptions being used as the legislature expected? Are unintended gaps emerging? Is the deviation rate under Model B within constitutional bounds, or is the validation framework miscalibrated? The feedback loop between legislation and implementation, which today is a years-long, anecdotal process, becomes a structured, evidence-based dialogue between the legislature that enacts the law and the infrastructure that applies it.[6]

The legislative simulation capability described in this scenario is not hypothetical. The Dutch Ministry of the Interior’s RegelRecht project (Bureau Architectuur Digitale Overheid, MinBZK) has built an open-source proof of concept that demonstrates precisely this function. RegelRecht encodes Dutch legislation in machine-readable YAML specifications, executed by deterministic engines (implemented in both Go and Python, with a Rust engine under development for the MVP). The project’s architecture explicitly includes a simulation environment in which policymakers can test the consequences of proposed legislative changes before they are enacted: the machine-executable specification of the proposed amendment, together with the specifications of related legislation, is evaluated against test data to simulate how the change would affect specific cases and entire populations. The project asks the questions that this scenario presupposes: can we predict the impact of new regulations before implementation? Can we detect conflicts with existing legislation? Can we test different policy options against the same population? Can we identify inconsistencies before they produce harm?[7]
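
The core of such a simulation is simple to state. The sketch below evaluates a current and a proposed rule against an anonymised case population and reports the distributional and fiscal delta; a RegelRecht-style engine would evaluate a machine-readable specification rather than a Python callable, and the allowance figures here are invented for illustration.

```python
from typing import Callable

Case = dict                        # one anonymised household record
Rule = Callable[[Case], float]     # a deterministic engine: facts in, allowance out

def simulate_reform(current: Rule, proposed: Rule,
                    population: list[Case]) -> dict:
    """Evaluate a proposed Decision Tree against an anonymised case
    population and report impact before the vote (hypothetical interface)."""
    gainers = losers = unchanged = 0
    fiscal_delta = 0.0
    for case in population:
        before, after = current(case), proposed(case)
        fiscal_delta += after - before
        if after > before:
            gainers += 1
        elif after < before:
            losers += 1
        else:
            unchanged += 1
    return {"gainers": gainers, "losers": losers,
            "unchanged": unchanged, "fiscal_delta": fiscal_delta}

# Invented toy reform: raise the allowance and widen the income band.
current_rule = lambda c: 100.0 if c["income"] < 2000 else 0.0
proposed_rule = lambda c: 120.0 if c["income"] < 2500 else 0.0
population = [{"income": 1500}, {"income": 2200}, {"income": 3000}]
```

Running this over the three toy cases shows two gaining households, one unchanged, and a fiscal delta of 140: exactly the kind of pre-vote summary the scenario describes, scaled down to a sketch.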

Rights That Cross Borders

A family moves from Munich to Amsterdam. Their certified Personal Agent navigates the federated Registry landscape: it identifies the German norms that expire, the Dutch norms that begin, and the EU norms that remain constant. It assembles the facts, submits applications in both jurisdictions through the respective MCP interfaces, and produces a composite audit trail that documents the entire legal transition: which entitlements ended, which began, which required human assessment, and which were processed deterministically. The relocation, which today requires months of bureaucratic correspondence across unconnected administrative systems, becomes a structured transition in which the citizen’s rights in both jurisdictions are transparently and verifiably preserved.

The cross-border scenario described here extends an institutional infrastructure that the European Union has already begun to build, but that currently operates at the level of data exchange rather than at the level of normative interoperability.[8] The Single Digital Gateway Regulation (Regulation (EU) 2018/1724) requires that 21 administrative procedures (including registration of residence, recognition of academic qualifications, claiming pension benefits, and registering vehicles) be available fully online and accessible to cross-border users across all Member States. The Once-Only Technical System (OOTS), launched in December 2023 as the core infrastructure of the Single Digital Gateway, enables public authorities in different Member States to exchange verified documents and evidence directly at the citizen’s request, eliminating the need for repeated submissions. OOTS is a significant achievement: it creates a trusted, cross-border data exchange layer for public administrations. But it operates at the factual layer, not the normative layer. It enables the exchange of evidence (a birth certificate, a residence proof, a qualification document) between authorities. It does not enable the exchange or comparison of the normative specifications that those authorities apply to that evidence. Authority A in Member State X can receive a document from Authority B in Member State Y through OOTS. What Authority A cannot do is query Authority B’s normative specification to understand how Member State Y’s law evaluates the same facts that Member State X’s law evaluates. The OLRF’s federated Registry adds the missing normative layer. Where OOTS enables cross-border exchange of facts, the federated Registry enables cross-border visibility of norms.
A certified Personal Agent navigating a cross-border relocation can query both jurisdictions’ Registries, retrieve both Decision Trees, compare the Coverage Maps, and produce a composite audit trail that documents which norms in each jurisdiction applied to the citizen’s situation, which facts were decisive in each evaluation, and where the two jurisdictions’ treatments diverge. The OOTS and the OLRF are therefore complementary: OOTS provides the factual plumbing (evidence exchange), the OLRF provides the normative transparency (specification visibility). Together, they would enable the kind of structured, transparent, verifiable cross-border administrative transition that the scenario describes.[9]

The End of Compliance as Overhead

A European fintech start-up wants to operate in six EU member states. Instead of commissioning six independent legal analyses and six separate compliance implementations, its agent queries the federated Registry: six Decision Trees, six Coverage Maps, one structured compliance check. The result: six signed Decision Packages documenting which requirements are satisfied and which are not. The process takes hours, not months. The cost falls by an order of magnitude. The regulatory certainty increases, because the answer is not the opinion of a law firm but the result of an evaluation against the authoritatively published Decision Tree. Regulatory compliance ceases to be a barrier to market entry and becomes a public utility: as accessible, as reliable, and as cheap as checking whether a domain name is available.[10]
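
The shape of such a structured compliance check can be sketched in a few lines. Each jurisdiction's Decision Tree is modelled here as a list of named requirement predicates; the requirement names and capital figures are invented for illustration and are not actual regulatory thresholds.

```python
def check_compliance(facts: dict, registries: dict) -> dict:
    """Evaluate the same facts against each jurisdiction's published tree.

    Hypothetical sketch: each registry entry is a list of
    (requirement_name, predicate) pairs standing in for a Decision Tree.
    Returns, per jurisdiction, whether all requirements pass and which failed.
    """
    results = {}
    for jurisdiction, requirements in registries.items():
        failed = [name for name, ok in requirements if not ok(facts)]
        results[jurisdiction] = {"compliant": not failed, "failed": failed}
    return results

# Two invented jurisdictions with invented thresholds:
registries = {
    "DE": [("licensed_custodian", lambda f: f["has_custodian"]),
           ("min_capital_eur", lambda f: f["capital"] >= 730_000)],
    "NL": [("min_capital_eur", lambda f: f["capital"] >= 125_000)],
}
facts = {"has_custodian": True, "capital": 500_000}
```

The output names every unmet requirement per jurisdiction, which is what turns compliance from an opinion into a checkable result.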

Civil Society as Code Reviewer

A social welfare NGO inspects the Decision Tree for a new benefit reform during the pre-publication review period. It identifies that an exception protecting single parents has been classified as “Excluded” in the Coverage Map, even though the statutory text explicitly provides for it. The NGO files a formal objection that names the specific node, the specific anchor, and the specific omission. The responsible authority must respond. The response is recorded in the Registry audit trail. The NGO has not “criticised the system.” It has identified a specific error in a specific element of a specific normative specification and compelled a documented correction. Democratic scrutiny of automated governance becomes precise, technical, and consequential, rather than vague, political, and easily deflected.

The State Examination for Machines

An LLM-based Legal Agent applies for certification under the OLRF’s agent certification system for Model B subsumption in German income tax law. It processes the Decision Tree’s complete test suite: ordinary cases, edge cases, exception paths, escalation scenarios, adversarial inputs, consistency tests across case populations. It passes. Its certification is published in the Registry: domain-specific, model-specific, version-bound. From this moment, it can be deployed by any German tax authority that has published a Class B Decision Tree for income tax. It is verified at Control 5 of the interface layer every time it initiates an interaction. Its certification can be revoked if its performance degrades. It has passed the Staatsexamen for machines.[11]
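
The logic of such a test-based qualification can be sketched as a harness that runs a candidate agent through categorised test cases and grants certification only if every category meets its threshold. Everything here is illustrative: the category names, the thresholds, and the toy income rule (the €11,000 figure is invented, not the actual German basic allowance), and a real suite would also score escalation behaviour, not only outcomes.

```python
def certify_agent(agent, test_suite: dict, thresholds: dict) -> dict:
    """Run a candidate Legal Agent through a Decision Tree's test suite
    (hypothetical harness). Each case is a (facts, expected) pair; a category
    passes if its pass rate meets the threshold, and certification requires
    every category to pass."""
    report = {}
    for category, cases in test_suite.items():
        passed = sum(1 for facts, expected in cases if agent(facts) == expected)
        rate = passed / len(cases)
        report[category] = {"pass_rate": rate, "ok": rate >= thresholds[category]}
    certified = all(entry["ok"] for entry in report.values())
    report["certified"] = certified
    return report

# Toy rule and suite (all figures invented):
good_agent = lambda f: "taxable" if f["income"] > 11_000 else "exempt"
suite = {
    "ordinary": [({"income": 50_000}, "taxable"), ({"income": 5_000}, "exempt")],
    "edge":     [({"income": 11_000}, "exempt"), ({"income": 11_001}, "taxable")],
}
thresholds = {"ordinary": 1.0, "edge": 1.0}
```

A certification decision produced this way is reproducible: the same agent, suite, and thresholds always yield the same verdict, which is what makes publication in the Registry meaningful.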

These scenarios are not independent of each other. They are different facets of the same architectural transformation. When law exists as executable infrastructure, the state’s relationship with its citizens changes qualitatively, not merely quantitatively. The citizen becomes better informed, because the normative basis of every decision is publicly accessible. The legislature becomes more effective, because the consequences of its enactments are visible in real time. The courts become more powerful, because the evidentiary record of every automated determination is complete, structured, and signed. Civil society becomes more precise, because the object of its scrutiny is a formal specification, not an opaque system. And the agents that increasingly mediate the citizen’s encounter with the state are qualified, certified, monitored, and subject to revocation, rather than unexamined tools operating without constitutional status.

That is the state that could exist. The rest of this paper describes how to build it.

Regulation Is Not the Problem. Opacity Is.

Rules are not friction. They are memory: the institutional record of what societies learned, at great cost, when power went unchecked. Child labour was once efficient. Untreated sewage was once cheaper than sanitation. Leaded petrol was once good for engine performance. The regulations that ended these practices were not obstacles to progress. They were its precondition. European regulation, at its best, is the accumulated wisdom of a continent that has twice in living memory experienced what happens when power operates without accountability. That wisdom is not a competitive disadvantage. It is a civilisational inheritance.

The problem is not that Europe regulates. The problem is that European regulation, as it currently operates in the relationship between enacted law and digital systems, imposes costs that bear no necessary relationship to the regulation’s substantive content. Every company operating across European jurisdictions today must interpret, model, and implement the same legal norms independently, redundantly, and at its own expense, in its own software, in its own format, without access to an authoritative specification of what the law actually requires in machine-executable form. A mid-sized financial services firm operating in six EU member states maintains six separate compliance implementations of substantially overlapping regulatory requirements. A logistics company processing cross-border shipments runs seven different interpretations of the same customs and safety norms. A digital health start-up entering the EU market must independently model GDPR, the Medical Device Regulation, and applicable national health law, three times over, because the authoritative machine-executable form of none of them exists.

The redundancy is not legally required. It is architecturally imposed, by the absence of the infrastructure this paper proposes.[12]

This is the innovation tax that Law as Code eliminates. Not the regulatory obligation itself (that remains exactly as democratically enacted), but the frictional cost of re-interpreting, re-implementing, and re-certifying compliance with that obligation independently across every system, every jurisdiction, and every update cycle. When the machine-executable form of a legal norm is published once, authoritatively, by the responsible authority, in a format that any conformant system can query and apply, compliance becomes a public utility rather than overhead. The regulatory obligation is unchanged. The cost of satisfying it falls by an order of magnitude.

The agent revolution deepens this argument. When certified agents operate on authoritatively published Decision Trees, the question “which law is the agent applying?” has a verifiable answer. When uncertified agents operate on privately implemented interpretations of the law, the same question has no answer at all, or worse, it has an answer that no one can check. In a world of agentic governance, the absence of Law as Code is not merely an efficiency problem. It is a sovereignty problem: the state has lost control over the normative basis on which its agents act. The OLRF restores that control.[13]

This is the competitive inversion that the OLRF makes possible, and it is the argument that transforms the European innovation and digital sovereignty debate from a defensive posture into a strategic opportunity. Europe’s regulatory density, which is routinely characterised as a burden that drives innovation to more permissive jurisdictions, is in fact an asset whose value has been systematically destroyed by the absence of the infrastructure to deliver it efficiently. A continent with clear, comprehensive, democratically legitimate rules, rules that are precise enough to be machine-executable, authoritative enough to be legally relied upon, and accessible enough to be queried in real time by any conformant system, is not a harder place to do business than a continent with no rules. It is an easier one. It is a place where legal certainty is not a luxury that only well-resourced companies can afford, but a public utility available to every start-up, every SME, and every foreign investor that needs to understand what the law requires before committing capital.

The United States has Silicon Valley. China has state-directed industrial innovation policy. Europe has the rule of law, the most sophisticated, most democratically legitimate, and most internationally respected regulatory framework in the world. The question is whether that framework will remain a paper asset, available only to those who can afford the lawyers and engineers to interpret it, or whether it will become digital infrastructure, available to anyone with an API call. The OLRF is the answer to that question. It does not weaken European regulation. It fulfils it, by making legal certainty as accessible, as reliable, and as sovereign as the electricity grid.

Government as an Operating System --- The Metaphor and Its Limits

The image of government as an operating system is not new. It has been used loosely, variously, sometimes carelessly, to gesture at the state’s role as the foundational layer of social organisation: the infrastructure on which everything else runs.

We use it here in a more precise, and more demanding, sense.

An operating system does not merely provide services. It enforces rules. It manages resources. It arbitrates between competing processes. It provides security and guarantees that every program running on it operates within defined boundaries, and it does so in a way that is consistent, predictable, and auditable. When a process violates the rules, the operating system intervenes. No process gets to rewrite the kernel.

This is the vision: law as the kernel of democratic society. Not a document to be consulted, but an executable layer to be applied. Rules that are not merely written but that run. Norms that are not merely published but that work. A legal order that governs machines the way it governs people: with binding force, with traceability, and with the possibility of appeal.

Under the three-model framework that this paper develops, the operating system metaphor acquires a more specific meaning. Model A is the kernel in its strictest sense: deterministic evaluation, identical treatment, no variance. The tree decides. Model B is the kernel with a managed exception handler: a certified agent proposes a determination, and the kernel validates it against the specification, classifying deviations and escalating where thresholds are exceeded. Model C is the kernel with a qualified autonomous process: a certified agent reasons independently, and the kernel provides the retrospective audit against which the reasoning is assessed. In each case, the kernel (the Decision Tree, the Registry, the evaluation engine) remains sovereign. It is the fixed point around which increasing degrees of AI autonomy are organised, bounded, and held accountable.[14]
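
The division of labour between tree and agent across the three models can be sketched as a dispatch function. The tree is modelled as a deterministic callable and the return fields are illustrative assumptions, not a published OLRF interface; the point is only the control flow: the tree decides (A), validates (B), or audits (C).

```python
from enum import Enum

class Model(Enum):
    A = "deterministic"   # the tree performs the complete evaluation
    B = "validated"       # a certified agent proposes, the tree validates
    C = "audited"         # a certified agent decides, the tree audits afterwards

def decide(model: Model, tree, facts: dict, agent=None) -> dict:
    """Sketch of the kernel's sovereignty across the three models
    (hypothetical interface; `tree` and `agent` are callables over facts)."""
    baseline = tree(facts)
    if model is Model.A:
        return {"outcome": baseline, "source": "tree"}
    proposal = agent(facts)
    if model is Model.B:
        # The kernel validates the agent's proposal; a deviation is not
        # silently accepted but escalated for human assessment.
        if proposal != baseline:
            return {"outcome": None, "source": "escalated",
                    "deviation": {"agent": proposal, "tree": baseline}}
        return {"outcome": proposal, "source": "agent-validated"}
    # Model C: the agent's determination stands, with the tree's baseline
    # recorded for retrospective audit.
    return {"outcome": proposal, "source": "agent", "audit_baseline": baseline}
```

In every branch the tree is evaluated first: whatever autonomy the agent has, the published specification remains the fixed point of comparison.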

But we must be precise about what this metaphor does not mean. An operating system is designed by engineers, optimised for performance, and updated by its manufacturer. Democratic law is none of these things. It is enacted by elected representatives, interpreted by independent judges, contested by citizens, and changed through deliberation. The operating system metaphor captures the structural role of law. It must never be mistaken for a mandate to reduce law to code, to replace judicial interpretation with algorithmic execution, or to allow the engineering community to set the normative agenda.

The machine must serve the law. The law must not become a machine.

This distinction, precise, principled, and non-negotiable, is the architectural foundation of everything that follows.

It is worth stating explicitly what this vision does not pursue. The OLRF does not seek to build a universal formal model of law, nor does it aspire to the utopia of government by machines that has driven much of the AI and Law research community for decades. The Coverage Map is the architectural acknowledgment that not everything can be formalised: it documents, publicly and precisely, where formalisation ends and where human judgment must remain. The Discretion Points are the architectural guarantee that the system stops where the law requires judgment rather than computation. And the three models are the honest recognition that AI will participate in normative reasoning, and that the constitutional order must govern that participation rather than pretending it does not exist.[15]

What This Paper Proposes

The Open Legal Reasoning Framework (OLRF), developed through SPRIND’s Law as Code initiative, is an open, technology-neutral architecture that takes this distinction seriously and builds from it.

The OLRF does not replace law. It does not replace judges. It does not replace legislators or the democratic deliberation through which law acquires its legitimacy. What it does is close the gap: between the law as enacted and the law as executed, between the norm as written and the decision as made, between the intent of the legislator and the output of the system.

It does this by establishing legal certainty as a form of public digital infrastructure: as fundamental, as shared, and as sovereign as roads, electrical grids, or the internet itself. It proposes a world in which every piece of legislation can optionally be accompanied by a machine-executable representation of its logic, a Decision Tree, that is published by the state, signed by the responsible authority, versioned like software, and verifiable by anyone: by the AI agents that query it, by the courts that review it, by the civil society organisations that scrutinise it, and by the citizens it affects.

The architecture operates through three models that describe the Decision Tree’s variable function as AI participation in normative reasoning increases. Under Model A, the tree performs the complete evaluation: deterministic, reproducible, identical for every citizen. Under Model B, a certified Legal Agent performs subsumption, and the tree validates the agent’s reasoning against the published specification. Under Model C, a certified Legal Agent reasons autonomously from statutory text, and the tree provides the baseline for retrospective audit. The constitutional requirements (equal treatment, reason-giving, promulgation, proportionality, judicial reviewability) are constant across all three models. The mechanisms through which they are satisfied differ.

The architecture includes a formal agent certification system: a domain-specific, model-specific, test-based qualification process for AI agents that exercise normative functions. The certification is the state’s instrument for ensuring that the machine actors it permits to apply its law meet the standards that the constitutional order demands. It is the Staatsexamen for machines.

The architecture connects the existing ecosystem of Law-as-Code projects and agentic administration initiatives through three docking patterns (Connector, Validator, Certifier) that allow diverse systems to publish, validate, and certify against a shared normative infrastructure without abandoning their existing implementations.

The architecture is submitted for international standardisation through a layered governance model that assigns each component to the organisation best suited to its domain: ISO/IEC JTC 1 for the normative core, IETF for the protocol layer, OpenID Foundation for the credential layer, W3C and OASIS for the semantic layer, Linux Foundation Europe for the implementation layer, and domain-specific working groups for the vertical standards.

It is not a utopia. It is a specification.

Who This Paper Is For

This paper is addressed to five overlapping audiences, and it asks something different of each.

For politicians and senior officials, it asks for a strategic commitment: to treat the publication of executable law as a constitutional responsibility, not a technical afterthought, and to begin, now, with a small number of high-volume procedures where the stakes are high and the benefits are immediate.

For lawyers and judges, it asks for engagement with a question that will soon be unavoidable: what does the rule of law require when the decision-maker is an AI agent? And it offers a framework in which that question already has a legally coherent answer.

For technology leaders and engineers in government and industry, it offers something rarer than another standard: an architecture designed from constitutional principles outward, not from implementation convenience inward, one in which the hard questions of accountability, transparency, and democratic legitimacy are not deferred to policy but built into the technical design.

For business leaders and compliance officers, it signals a shift as significant as the introduction of double-entry bookkeeping: regulatory compliance that is not a cost centre but a verifiable, auditable, machine-readable output. Compliance not as overhead, but as infrastructure.

For the Law-as-Code community and the developers of agentic administration systems, it offers a constitutional layer that makes existing work publicly accountable and cross-jurisdictionally interoperable. The OLRF does not compete with existing projects. It connects them to a shared normative infrastructure that no individual project can provide for itself.

The Stakes

We are not building this because it is technically elegant, though it is. We are building it because the alternative, a world in which AI agents make consequential decisions about people’s lives on the basis of privately implemented, opaquely encoded, unverifiable interpretations of law, is a world in which democratic governance has been quietly outsourced to systems that no one elected, no one can audit, and no one can hold to account.

The agent revolution makes this urgency concrete. Within the next decade, the majority of first-instance administrative decisions in OECD countries will be substantially prepared, and in many cases effectively determined, by AI agents. The question is not whether this will happen. It is whether the agents that apply the law will be qualified to do so, whether the law they apply will be the law that was enacted, whether their reasoning will be visible and reviewable, and whether the citizens they affect will retain the constitutional protections that democratic government promises.

The choice before us is not between automation and non-automation. That choice has already been made, by the weight of necessity and the pace of technological change. The choice is between automated governance that is accountable to the law and automated governance that is merely adjacent to it. Between agents that are qualified, certified, and subject to democratic oversight, and agents that operate without constitutional status in the name of a state that has not examined their competence. Between a legal infrastructure that is sovereign, open, and democratically governed, and one that is proprietary, opaque, and commercially controlled.

The OLRF is our answer to that choice. This paper is the case for making it.

Footnotes

  1. The structural gap between enacted law and its digital implementation is documented across jurisdictions. See: Loukis, E. et al., “Transforming Government through Rule Implementation in E-Government Systems”, Government Information Quarterly, Vol. 39, No. 2, 2022; Coglianese, C. and Lehr, D., “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era”, Georgetown Law Journal, Vol. 105, 2017, pp. 1147 ff.; Hildebrandt, M., Law for Computer Scientists and Other Folk, Oxford University Press 2020, pp. 258 ff. (on the gap between legal text and computational implementation as a systematic source of rule-of-law failure).

  2. The distinction between AI systems that process within defined workflows and AI agents that pursue goals autonomously across extended timeframes is foundational to the constitutional analysis of this paper. See: Shavit, Y. et al., “Practices for Governing Agentic AI Systems”, OpenAI Research Paper, 2023; Chan, A. et al., “Visibility into AI Agents”, Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2024; for the multi-agent coordination dimension: Google, “Agent2Agent (A2A) Protocol Specification”, 2025.

  3. The institutional drift from AI “assistance” to de facto delegation of normative authority is analysed in: Parasuraman, R. and Riley, V., “Humans and Automation: Use, Misuse, Disuse, Abuse”, Human Factors, Vol. 39, No. 2, 1997, pp. 230 ff.; Skitka, L. J. et al., “Does Automation Bias Decision-Making?”, International Journal of Human-Computer Studies, Vol. 51, No. 5, 1999, pp. 991 ff.; for the legal dimension: Citron, D. K., “Technological Due Process”, Washington University Law Review, Vol. 85, 2008, pp. 1249 ff.

  4. The vision of law that updates its own operational form upon amendment presupposes the Registry’s temporal addressability and version control (Chapter 7). The technical feasibility of near-real-time propagation has been demonstrated in the domain of financial regulation by the Financial Conduct Authority’s Digital Regulatory Reporting initiative (FCA, “Call for Input: Using Technology to Achieve Smarter Regulatory Reporting”, 2018) and in the domain of tax law by the Dutch Belastingdienst’s ALEF pipeline (Corsius, M. et al., “RegelSpraak: a CNL for Executable Tax Rules Specification”, CNL 2020/21).

  5. The concept of the citizen as principal rather than petitioner draws on: Breidenbach, S., Was Gesetze sein könnten: Mit Methode zum guten Gesetz, C. H. Beck 2025, Kap. 6 (the citizen’s right to an intelligible and navigable legal order); for the scale of unclaimed entitlements: Currie, J., “The Take-Up of Social Benefits”, in Auerbach, A. and Card, D. (eds.), Public Policy and the Income Distribution, Russell Sage Foundation 2006, pp. 80 ff. (documenting that up to 40% of eligible households fail to claim benefits they are entitled to, primarily due to information asymmetry and procedural complexity).

  6. The legislative simulation capability builds on: PolicyEngine, https://policyengine.org/ (population-level microsimulation of tax and benefit reforms); Assemblée Nationale / beta.gouv.fr, “LexImpact”, 2019 (OpenFisca-based legislative impact simulation during the French budget debate). The OLRF extends this capability from parameter changes to structural changes in the Decision Tree itself, and from ex ante simulation to ex post monitoring through aggregated Decision Package analysis.

  7. MinBZK, RegelRecht, https://minbzk.github.io/regelrecht/ and https://docs.regelrecht.rijks.app/; GitHub repositories: MinBZK/poc-machine-law, MinBZK/regelrecht-mvp; Digitale Overheid, “Innovatiebudget toegekend aan 15 projecten”, July 2025; Digitale Overheid, “Demodag 2026: werken aan meer hergebruik”, February 2026; regels.overheid.nl (Dutch government rules publication platform).

  8. Wimmer, M. A. et al., “Once-Only Principle: Benefits, Barriers and Next Steps”, in Proceedings of the 13th International Conference on Electronic Government (EGOV), 2017, pp. 203 ff.

  9. Regulation (EU) 2018/1724, Arts. 6, 14 (online availability and cross-border accessibility of procedures); Commission Implementing Regulation (EU) 2022/1463 (technical specifications for OOTS); Interoperable Europe Portal, “Once-Only Technical System (OOTS)”, 2023; Regulation (EU) 2024/903 (Interoperable Europe Act), Art. 3 ff. (interoperability requirements for public sector digital services).

  10. The proposition that regulatory compliance can become a public utility rather than overhead follows the logic of: Breidenbach, S. and Glatz, F. (eds.), Rechtshandbuch Legal Tech, 2. Aufl., C. H. Beck 2021 (the industrialisation of legal certainty); for the specific cost reduction estimates: European Commission, “Fitness Check on Reporting and Monitoring of EU Legislation”, SWD(2023) 640 final (documenting the cumulative cost of redundant compliance implementations across Member States).

  11. The agent certification system is specified in Chapter 10. The Staatsexamen analogy is developed there in full: the functional equivalence between a human official who has passed the prescribed examinations (§5 DRiG) and a machine actor that has passed the prescribed test suite is the constitutional foundation of the certification requirement.

  12. The architectural imposition of redundant compliance costs is distinct from the regulatory obligation itself. The obligation (e.g. to satisfy Article 17 of the AI Act) is democratically enacted and substantively justified. The cost of implementing that obligation independently in every system, in every jurisdiction, in every update cycle, is an infrastructure failure, not a regulatory one. The OLRF eliminates the second without touching the first.

  13. The sovereignty dimension of the agentic governance problem is developed in Chapter 14. The core argument: a state that permits agents to apply its law without controlling the normative basis on which they act has ceded a constitutional capacity. See: Nemitz, P., “Constitutional Democracy and Technology in the Age of Artificial Intelligence”, Philosophical Transactions of the Royal Society A, Vol. 376, No. 2133, 2018.

  14. The three-model framework is introduced in Chapter 4 and developed throughout Parts II and III. Model A corresponds to the scope of §35a VwVfG (fully automated administrative acts for bound decisions). Model B extends beyond §35a into guided evaluation by certified agents. Model C extends to autonomous normative reasoning, the most constitutionally demanding form of machine participation in the application of law. The kernel metaphor captures the architectural constant across all three models: the Decision Tree is sovereign regardless of the degree of AI autonomy operating around it.

  15. Merigoux (2024) argues persuasively that the AI and Law field has been dominated by the twin ideologies of legal formalism (law as a purely logical system) and cybernetics (the state as a steerable system), which together produce the utopia of “government by machines.” He demonstrates that this utopian orientation has been technically counterproductive, yielding systems that “want it all” and perform poorly on everything rather than being good at one task. The OLRF takes this critique seriously. It does not attempt to formalise “the law.” It builds a universal infrastructure within which domain-specific formal models coexist, interoperate, and are independently evaluated. The distinction is architecturally enforced: each Decision Tree models a specific norm, not the legal order as a whole, and the Coverage Map documents the boundaries of each formalisation as carefully as the formalisation itself. See: Merigoux, D., “Scoping AI & Law Projects: Wanting It All is Counterproductive”, Journal of Cross-disciplinary Research in Computational Law (CRCL), Vol. 2, Issue 2, 2024.
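The temporal addressability described in footnote 4 can be made concrete with a minimal sketch: a registry in which each rule identifier maps to dated versions, and a decision taken on a given date resolves against the version in force on that date, so that an amendment propagates simply by publishing a new version. The identifier, rates, and `Registry` API below are hypothetical illustrations, not the Registry specified in Chapter 7.

```python
from bisect import bisect_right
from dataclasses import dataclass, field

@dataclass
class RuleVersion:
    effective_from: str  # ISO date on which this version enters into force
    content: str         # placeholder for the executable rule


@dataclass
class Registry:
    # rule identifier -> list of versions, kept sorted by effective date
    rules: dict = field(default_factory=dict)

    def publish(self, rule_id: str, effective_from: str, content: str) -> None:
        """Register a new version; amendments never overwrite history."""
        versions = self.rules.setdefault(rule_id, [])
        versions.append(RuleVersion(effective_from, content))
        versions.sort(key=lambda v: v.effective_from)

    def resolve(self, rule_id: str, as_of: str):
        """Return the version in force on `as_of`, or None if none applies.
        ISO date strings compare correctly as plain strings."""
        versions = self.rules.get(rule_id, [])
        dates = [v.effective_from for v in versions]
        i = bisect_right(dates, as_of)
        return versions[i - 1] if i else None


# Hypothetical identifier and values for illustration only.
reg = Registry()
reg.publish("de/sgb-ii/s20", "2024-01-01", "standard rate: 563 EUR")
reg.publish("de/sgb-ii/s20", "2025-01-01", "standard rate: 563 EUR (frozen)")

# A decision dated mid-2024 resolves against the 2024 version;
# one dated after 1 January 2025 resolves against the amendment.
print(reg.resolve("de/sgb-ii/s20", "2024-06-15").content)
```

The point of the sketch is the lookup discipline, not the data model: because every resolution names both a rule and a date, a Decision Package can later be re-verified against exactly the version that was in force when the decision was made.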