Chapter 1
The Governance Crisis in the Age of AI
“The most dangerous moment for a bad government is when it begins to reform.”
Alexis de Tocqueville
The Convergence: AI, Autonomous Agents, and Cyber-Physical Systems in Public Life
History rarely announces its turning points. The steam engine did not arrive with a declaration. The printing press did not come with a manifesto. And the transformation now underway in the relationship between citizens and the institutions that govern them is arriving, as most profound changes do, not as a single event but as an accumulation of unremarkable ones.
A social worker in Hamburg queries a decision-support system that scores benefit applications by risk of non-compliance. A building authority in Lyon processes permit applications through an automated workflow that cross-checks zoning rules, environmental constraints, and ownership registers without a human reviewing each individual file. A tax administration in Warsaw deploys machine learning models to select returns for audit. A customs authority at Rotterdam coordinates with freight management agents that pre-clear goods against regulatory databases before the containers leave the port. In each of these cases, a consequential, legally significant, life-affecting decision is being made, shaped, or substantially pre-determined by a system that is not a person and is not, in any meaningful sense, reading the law.¹
This convergence of AI, autonomous software agents, and cyber-physical systems in the administration of public life is not a future scenario. It is the present condition of governance in every advanced democracy. And it is accelerating.
By 2030,² the majority of first-instance administrative decisions in OECD countries (routine benefit determinations, licence approvals, tax assessments, permit grants, compliance certifications) will be substantially automated. Not because governments have decided this as a matter of policy, though some have, but because the volume of decisions that modern administrative states are required to produce has long since outgrown the capacity of human bureaucracies to produce them at acceptable cost, speed, and consistency. A German social insurance agency processes more than fifty million individual determinations each year. A French tax authority handles more than forty million returns. A pan-European customs system processes hundreds of millions of declarations. The numbers are not compatible with a model in which a trained human official reads each file, consults the relevant legislation, applies judgment, and issues a reasoned decision.
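To make that arithmetic concrete, consider a rough illustrative calculation. Only the caseload figure comes from the paragraph above; the per-file review time and annual working hours are assumptions chosen for illustration, not sourced statistics.

```python
# Back-of-the-envelope staffing arithmetic. Only the caseload figure comes
# from the text above; the other two figures are illustrative assumptions.

decisions_per_year = 50_000_000   # German social insurance caseload (see text)
minutes_per_file = 30             # assumed: time for one careful human review
hours_per_official_year = 1_600   # assumed: productive hours per official per year

total_hours = decisions_per_year * minutes_per_file / 60
officials_needed = total_hours / hours_per_official_year

print(f"{total_hours:,.0f} review hours per year")
print(f"~{officials_needed:,.0f} full-time officials, for a single agency")
# 25,000,000 review hours per year
# ~15,625 full-time officials, for a single agency
```

Even halving the assumed review time leaves roughly eight thousand full-time officials for a single agency's first-instance caseload alone, before appeals, supervision, or any change in the law.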
Automation, in this sense, is not a choice. It is a response to arithmetic. The question that matters is what kind of automation.
This paper proposes that there are fundamentally different ways in which AI systems can participate in legal governance, ranging from strictly constrained fact-finding to autonomous legal reasoning. Each carries different constitutional implications, different risks, and different requirements for the institutional infrastructure that surrounds it. The choice between them is not primarily a technical decision. It is a constitutional one. And most governments are currently making it by default, without recognising that a choice is being made.
The Structural Problem: Law as Static Documents in a Dynamic World
To understand why this matters, it is necessary to understand something about the nature of law as it currently exists in its relationship to digital systems, and why that relationship is fundamentally broken.
Law, in every jurisdiction on earth, is published as text. Statutes, regulations, administrative guidelines, court decisions: these are documents, in the oldest sense of the word. They are written by human beings, for human beings, to be read, interpreted, and applied by human beings trained in the arts of legal reasoning. This is not an accident or a historical oversight. It is the product of a tradition of governance stretching back to Justinian, Magna Carta, and the revolutionary constitutions of the eighteenth century, a tradition that understands law as an act of language, of deliberation, of meaning-making within a political community.
That tradition has served democratic societies extraordinarily well. It has also produced a structural mismatch with the requirements of digital governance that grows more severe with every passing year.
When a software system is required to apply a legal norm (to determine whether a citizen qualifies for a benefit, whether a structure complies with a building code, whether a transaction is subject to a regulatory obligation) it cannot read the statute the way a lawyer reads it. Or rather: until recently, it could not. The emergence of large language models capable of sophisticated legal reasoning has begun to change this picture, and we will soon see significant improvements. But the question is not whether an AI system can read and reason about legal text. It is whether a democratic legal order can accept the results of that reasoning as the basis for binding decisions about the lives of its citizens. That question has a constitutional answer, and the answer is more nuanced than either a total prohibition or a global permission.
Historically, the translation from legal text to executable logic has been performed by software engineers working from their reading of the law, or from their reading of a lawyer’s summary of the law, or from their reading of a policy analyst’s summary of a lawyer’s summary. The result is a private interpretation, neither authoritative nor verifiable, encoded in systems that may remain in production for a decade or more, applied to millions of decisions, never formally reviewed against the statutory text they claim to implement, and invisible to anyone outside the development team that built them.³
This is not a criticism of the engineers or the lawyers or the officials involved. It is a description of a structural gap that no individual actor can resolve: the gap between law as a text written for human interpretation and law as logic required for machine execution. The gap exists because no institutional mechanism has ever been created to bridge it. Legislatures publish statutes. They do not publish algorithms. And in the absence of an authoritative, verifiable, machine-executable representation of legal logic, every system that applies law digitally must build its own, alone, without oversight, and without any way for the outside world to verify whether the result is correct.
Fragmentation, Inconsistency, Opacity: The Cost of the Status Quo
The consequences of this structural gap are neither abstract nor rare. They are the daily operating environment of digital governance, and they manifest in four distinct and compounding pathologies.
Fragmentation is the first and most pervasive. The same statute, the same words enacted by the same legislature, is independently re-implemented by every entity required to apply it.⁴ A federal social insurance law in Germany is encoded separately by the Federal Employment Agency, by each of the sixteen state social welfare authorities, by dozens of municipal authorities, by private software vendors supplying case management systems to all of the above, and by the companies providing compliance tools to employers. Each implementation reflects the implementer’s interpretation, the implementer’s data model, and the implementer’s understanding of edge cases. The law is the same. The implementations are not. Citizens in identical situations receive different outcomes depending on which system processed their case.
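A minimal sketch makes the mechanism concrete. The statutory rule and both encodings below are invented for illustration; what matters is the pattern: an undocumented interpretive choice surfacing at a boundary condition.

```python
# Two hypothetical re-implementations of one invented statutory rule:
# "The benefit is granted if monthly income does not exceed EUR 1,200."
# Neither encoding is obviously wrong; they diverge at the boundary.

def eligible_agency_a(monthly_income_eur: float) -> bool:
    # Agency A reads "does not exceed" as inclusive of the threshold.
    return monthly_income_eur <= 1200

def eligible_vendor_b(monthly_income_eur: float) -> bool:
    # Vendor B's case management system ships a strict comparison.
    return monthly_income_eur < 1200

income = 1200.00
print(eligible_agency_a(income), eligible_vendor_b(income))  # True False
# Identical citizen, identical statute, different outcomes. Neither system
# records which reading of the text it chose, or why.
```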
Inconsistency follows from fragmentation and compounds it. Inconsistency in first-instance decisions is not merely an administrative inconvenience. It is a violation of the most fundamental principle of the rule of law: that like cases be treated alike. When the same legal norm produces systematically different outcomes in different systems, and when those differences are invisible because the systems are opaque, the rule of law has been formally preserved and substantively undermined. The statute was followed. The principle it embodies was not.
Opacity is the third pathology, and in many ways the most corrosive. A citizen who receives an adverse automated decision (a benefit denied, a permit rejected, a tax assessment disputed) is entitled, in principle, to understand the basis of that decision and to challenge it. In practice, the basis is often a proprietary algorithm that the authority itself cannot fully explain, applied to a data set the citizen cannot inspect, producing a result whose relationship to the relevant legal provision cannot be demonstrated because it was never formally established. The right of appeal exists. The ability to exercise it meaningfully does not.
Escalating costs are the fourth pathology. Every re-implementation of legal logic represents cost: development cost, maintenance cost, the cost of synchronising updates when the law changes, the cost of errors discovered in production, the cost of litigation arising from those errors. Across a modern administrative state, these costs are measured in billions of euros annually, spent not on governance, but on the perpetual re-creation of logic that, in a better-designed system, would be created once, verified once, and shared by all who need it. The money wasted is significant. But it is a symptom, not the disease. The disease is the structural absence of law as shared, verifiable, executable infrastructure.
The Democratic Paradox: Automated Decisions Without a Normative Foundation
These pathologies converge in a paradox that sits at the heart of this paper and at the heart of the governance crisis of our era.
Democratic legitimacy flows from a chain: the people elect a legislature, the legislature enacts law, the law is applied to govern conduct. Each link in the chain is essential. Break any link, and the legitimacy of the outcome is severed. For centuries, the critical question of democratic governance was whether the legislature faithfully represented the people’s will and whether the executive faithfully applied the legislature’s law. Courts, constitutions, and accountability mechanisms were built around those questions.
The digital age has introduced a new and underappreciated way in which the chain can be broken: not between the people and the legislature, and not between the legislature and the executive, but between the law and the system that executes it. When a software system applies its developer’s interpretation of a statute rather than the statute itself, when there is no verifiable, authoritative connection between the norm as enacted and the logic as executed, the democratic chain has been severed at its final link. Decisions are made in the name of the law, by systems that cannot demonstrate their relationship to it.
This is not a hypothetical threat. It is the operational reality of digital government today. And it will become dramatically more acute as AI systems and autonomous agents assume an increasingly central role in administrative processes.
The Temptation of the Capable Machine
The paradox deepens when we consider what AI systems are now capable of. The strongest current AI systems demonstrate extraordinary ability in tasks involving language, pattern recognition, knowledge synthesis, and reasoning under uncertainty. They can read legal text. They can identify applicable provisions. They can construct arguments that are difficult to distinguish from those of trained lawyers. The temptation is obvious: if AI can reason about law as well as a human legal professional, why not let it?
This paper attempts to answer the question of whether AI will, at some point, prove capable of fully replacing humans in decision-making, and whether it should be allowed to do so. The answer it offers is not a simple yes or no, but a structured framework: conditions under which different forms of AI participation in legal governance can be constitutionally acceptable, and conditions under which they cannot.
A large language model that reasons about legal questions on the basis of its training data may produce correct conclusions most of the time. But “most of the time” is not a constitutional standard in most parts of the world, and “probably correct” is not a legal basis for a decision that changes a life. The question is not whether the machine is capable. It is whether its capability can be deployed within a framework that preserves the constitutional requirements of equal treatment, accountability, transparency, and democratic legitimacy. Those requirements are not negotiable. They are the conditions under which any exercise of public power, whether by a human or a machine or a combination of both, must operate.
The strongest current AI systems are potentially transformative tools for the investigative, preparatory work of administration: extracting relevant information from documents, classifying evidence, identifying applicable categories, constructing a coherent account of a situation. Whether they can go further (whether they can perform the normative evaluation itself, the subsumption of established facts under applicable law to produce a binding legal outcome) depends on whether their reasoning can be made verifiable, auditable, consistent, and democratically accountable. This paper proposes that different answers to that question lead to different architectural models, each with different implications for the design of legal infrastructure.
What This Paper Proposes
This paper proposes that the governance crisis described above can be resolved, but only if democratic societies are willing to build the institutional and technical infrastructure the resolution requires. The core of that infrastructure is a framework for making law machine-executable in a way that preserves, and in important respects strengthens, the constitutional commitments that democratic governance rests upon.
The Open Legal Reasoning Framework (OLRF), developed under the SPRIND Law as Code initiative and submitted for international consultation through the OECD Digital Government programme, is a proposal for such a framework. It does not claim to be the only possible architecture. It claims to be a constitutionally grounded, technically feasible, and institutionally realistic proposal that addresses the structural gap between law as text and law as executable logic, while honestly confronting the limits of what any technical architecture can achieve.
The OLRF proposes a migration path from today's largely inefficient patchwork digitalisation to full AI participation in legal governance, illustrated by three scenarios: strict separation (in which AI systems assemble facts and a deterministic Decision Tree evaluates them), guided evaluation (in which qualified AI agents apply law within a validating normative framework), and autonomous legal reasoning (in which AI agents apply law directly, with the normative framework serving as an audit and transparency instrument). These three models are not competing alternatives. They are points on a spectrum, and a mature implementation will use all three simultaneously, matching the model to the legal character of each decision.
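To make the strict-separation model concrete, here is a miniature sketch under invented assumptions: the rule content, the Facts and Node structures, and the citations are illustrative only, not the OLRF specification. AI agents assemble the facts; a deterministic Decision Tree, each node carrying a citation to the provision it implements, performs the legal evaluation and records every provision it applied.

```python
from dataclasses import dataclass

@dataclass
class Facts:
    # Assembled by AI fact-finding agents from the case file; every field
    # is a finding of fact, never a legal conclusion.
    monthly_income_eur: float
    is_resident: bool

@dataclass
class Node:
    citation: str          # the statutory provision this node implements
    test: callable         # deterministic predicate over the facts
    if_true: "Node | str"  # subtree, or a leaf holding the legal outcome
    if_false: "Node | str"

def evaluate(node: "Node | str", facts: Facts, trace: list) -> str:
    # Walks the tree deterministically, recording every provision applied.
    if isinstance(node, str):
        return node  # leaf: the legal outcome
    trace.append(node.citation)
    branch = node.if_true if node.test(facts) else node.if_false
    return evaluate(branch, facts, trace)

# An invented two-provision benefit rule.
tree = Node(
    citation="§ 1(1) Example Benefits Act (residence requirement)",
    test=lambda f: f.is_resident,
    if_true=Node(
        citation="§ 2(1) Example Benefits Act (income ceiling, EUR 1,200)",
        test=lambda f: f.monthly_income_eur <= 1200,
        if_true="GRANT",
        if_false="DENY",
    ),
    if_false="DENY",
)

trace: list = []
outcome = evaluate(tree, Facts(monthly_income_eur=950.0, is_resident=True), trace)
print(outcome, trace)  # GRANT, plus the provisions applied, in order
```

The design point is the boundary: the AI's output is exhausted once the Facts object is populated, and everything after that line is deterministic, inspectable, and tied node by node to the enacted text.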
What is constant across all three models is the requirement that the normative basis of every automated decision be publicly specified, verifiably anchored in the legislative text, and subject to democratic oversight, judicial review, and citizen challenge. The technical instrument through which this is achieved (a Decision Tree, a normative validation framework, a retrospective audit protocol) may vary. The constitutional commitment that the instrument serves does not.
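Continuing that sketch, a retrospective audit record might look like the following. Every field name here is an assumption for illustration, not an OLRF-defined schema; the point is that the record binds outcome, facts, provisions applied, and the exact version of the encoded rule.

```python
import json
from datetime import datetime, timezone

# A possible audit record for one decision from the sketch above.
# All field names are illustrative assumptions, not an OLRF schema.

outcome = "GRANT"  # as produced by evaluate() in the earlier sketch
trace = [
    "§ 1(1) Example Benefits Act (residence requirement)",
    "§ 2(1) Example Benefits Act (income ceiling, EUR 1,200)",
]

audit_record = {
    "decision_id": "2026-000042",                       # hypothetical identifier
    "outcome": outcome,
    "facts": {"monthly_income_eur": 950.0, "is_resident": True},
    "provisions_applied": trace,                        # every node traversed, in order
    "rule_version": "example-benefits-act/2026-01-01",  # which encoding of the law ran
    "decided_at": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(audit_record, indent=2, ensure_ascii=False))
# A citizen, a court, or an auditor can reconstruct from this record which
# provisions, in which encoded version of the law, produced the outcome.
```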
This is not a proposal for making governance more efficient, though it does that. It is a proposal for making governance in the age of AI democratically legitimate, constitutionally sound, and institutionally sustainable. It is, in the most precise sense, a proposal for carrying the values of democratic governance into a future in which machines participate in the exercise of public power, and for ensuring that this participation occurs within, and never in the place of, the rule of law.
Footnotes
1. The examples in this paragraph are illustrative composites drawn from documented practice across European administrations. For surveys of AI use in public administration, see: OECD (2023), AI in Government; European Commission (2023), AI Watch: Government use cases.
2. OECD (2023), Government at a Glance; McKinsey Global Institute (2023), The economic potential of generative AI; European Commission (2022), Artificial Intelligence – a European approach.
3. For the foundational analysis of this structural gap, see: New Zealand Government (2018), Better Rules for Government Discovery Report; OECD (2020), Cracking the Code: Rulemaking for humans and machines.