Chapter 2
Government as an Operating System — The Vision
Last updated: 2026-04-10. Open for review.
“The art of government is the organisation of idolatry.”
George Bernard Shaw, Maxims for Revolutionists, 1903
What the Metaphor Means - and What It Doesn't
Every powerful metaphor is also a dangerous one. The image of government as an operating system has circulated in technology policy circles for long enough to have accumulated both genuine insight and considerable confusion. We should therefore be precise about what we mean, and what we do not mean, before building an architecture on the image.[1][2]
An operating system, in the technical sense, is the foundational software layer that manages the resources of a computing environment, enforces the rules under which processes may operate, arbitrates conflicts between competing demands, and guarantees that no individual process can compromise the integrity of the system as a whole. It does not perform the tasks that run on top of it. It does not write the applications. It does not choose the purposes to which its resources are put. What it does is provide a stable, predictable, trustworthy environment in which those tasks can be performed, and ensure that every process that runs on it does so within defined and enforceable boundaries.
Used in this precise sense, the metaphor illuminates something important about the role of law in a democratic society. Law, like an operating system, is not the activity of governance - it is the condition of its possibility.[3] It does not build the hospital or pay the benefit or issue the permit. It defines the rules under which those things may be done, by whom, on what basis, subject to what constraints, and with what recourse when things go wrong. It is the layer beneath all other layers: the infrastructure of the infrastructure.
The metaphor becomes dangerous the moment it is extended beyond this structural analogy. An operating system is designed by engineers. Democratic law is not designed; it is enacted, through deliberation, contestation, compromise, and the exercise of political will by representatives accountable to the people they govern. An operating system is updated by its manufacturer, typically without the users' involvement. Democratic law is changed through parliamentary process, with debate, amendment, and public scrutiny. An operating system is optimised for performance. Democratic law is optimised - insofar as it is optimised for anything - for justice, for the protection of rights, for the reconciliation of competing interests, for values that cannot be expressed as performance metrics.
Most importantly: an operating system is neutral. Democratic law is not, and should not be. It embeds choices about who deserves protection, what counts as harm, and how competing claims are to be weighed: choices that are irreducibly political and must remain so. No technical system can make those choices. No technical system should try.
What the operating system metaphor does clarify is that the legal tradition, shaped around human reading and interpretation, was never built to govern a world in which machine actors participate in the exercise of public power. In such a world, law must serve not only as a set of arguments to be read and debated, but also as an infrastructure that is executable, a governing layer that can be run. The question is how to make it executable without losing the qualities (deliberation, contestation, openness to interpretation) that make it law rather than code.
Legal Certainty as a Utility: Law as Public Digital Infrastructure
In economics and public policy, few concepts have shown the same resilience across ideological traditions as that of the public good. It describes resources that are non-excludable and non-rivalrous, and whose provision cannot be entrusted to private markets alone without generating chronic underinvestment at the expense of society as a whole. Roads, clean water, the electrical grid, and the internet backbone are among the classic examples. They are not simply useful services. They constitute the foundational infrastructure of modern society and are therefore widely understood as matters of public responsibility rather than mere commercial provision.[4]
Machine-executable legal certainty is a public good of exactly this character. Its absence represents one of the most significant and least-acknowledged infrastructure deficits of the digital age.[5]
Consider what legal certainty makes possible in a functioning democratic society. It gives people confidence that the rules governing conduct are knowable in advance, applied with sufficient consistency, and stable enough to guide real decisions. Citizens rely on it when planning their lives, businesses when organising their affairs, and public officials when exercising authority. Legal certainty is not a luxury, nor merely a refinement of mature legal systems. It is a basic condition of rational agency in an ordered social world. Where it is absent, planning gives way to guesswork, trust becomes fragile, and the state’s claim to legitimate authority begins to weaken.
In the analogue world, legal certainty was secured, if always imperfectly, through familiar institutional means: the publication of legal texts, the relative consistency of judicial interpretation, and the professional formation of lawyers within a shared interpretive tradition. It was, in this sense, a distinctly human accomplishment, sustained by human institutions and bounded by human limitations. Those limitations were well known: the distance between expert knowledge and ordinary understanding, the variation of interpretation across courts and jurisdictions, and the delay with which legal change was absorbed into administrative and social practice.
In the digital world, legal certainty demands something further. It is no longer enough that rules be intelligible to human readers. They must also be capable of authoritative execution by machines. A system charged with applying a legal norm cannot derive certainty from a published text alone, however carefully drafted. What it requires is a formal specification of the legal logic contained in that text, structured in a way that is versioned, verifiable, and institutionally authoritative. At present, that layer is largely missing. As a result, every digital system that applies the law is forced to construct its own implementation in isolation, without meaningful public oversight, without common verification, and without the quality assurance that essential infrastructure demands.
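To make the idea of a "versioned, verifiable, and institutionally authoritative" specification concrete, the following is a minimal sketch in Python. Everything in it is invented for illustration: the rule identifier, the statutory citation, the income ceiling, and the field names are hypothetical, and nothing here reflects an actual OLRF schema. The point is only to show the three properties named in the text: a version that changes with every amendment, a content hash any actor can use to verify they are running the same rule, and a deterministic evaluation that yields the same outcome for the same input everywhere.

```python
import hashlib
import json
from dataclasses import dataclass

# Hypothetical illustration of a single legal rule published as a
# versioned, verifiable artefact. All names and values are invented.
@dataclass(frozen=True)
class RuleSpec:
    rule_id: str         # stable identifier, traceable to the enacting statute
    version: str         # every amendment yields a new, immutable version
    source: str          # citation of the democratic source of the rule
    income_ceiling: int  # the operative parameter extracted from the text

    def checksum(self) -> str:
        """Content hash, so any implementer can verify they hold the same rule."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def evaluate(self, declared_income: int) -> bool:
        """Deterministic application: same input, same outcome, in every system."""
        return declared_income <= self.income_ceiling

rule = RuleSpec("housing-benefit/eligibility", "2026-04", "Act XYZ, s. 12", 25_000)
print(rule.evaluate(24_000))   # True
print(rule.checksum()[:8])     # short fingerprint, e.g. for audit logs
```

The design choice worth noticing is that the specification is data plus a published evaluation function, not an opaque implementation: two independent systems that load the same version and verify the same checksum cannot silently diverge in how they apply the rule.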
The result is a form of legal uncertainty that remains largely invisible in the analogue sense, yet is pervasive in the digital one. The statute may be clear, the text properly published, and the law formally in force. But if no authoritative machine-executable form exists, the rule cannot reliably govern the systems that are meant to apply it. The norm exists in principle, while its operative form is absent in practice. In a world in which machines increasingly mediate decisions, a rule that cannot be executed by machines risks becoming, in functional terms, a rule without effective force.
The vision of the OLRF is to provide precisely this missing layer of infrastructure. Its purpose is to establish machine-executable legal certainty as a public utility, furnished by the state, subject to democratic accountability, and accessible to every actor operating within the legal order, whether public or private, human or machine. It is not conceived as a product, nor as a service in the ordinary commercial sense. It is conceived as infrastructure: foundational, shared, and sovereign, much like the electrical grid on which modern society depends.
This reframing carries far-reaching consequences. If machine-executable law is understood as public infrastructure, its provision can no longer be treated as a market opportunity. It becomes a responsibility of the state. Its governance must therefore be transparent and democratically accountable rather than proprietary and opaque. Its standards must remain open rather than dependent on any single vendor platform. And its costs must be borne collectively, as the costs of essential infrastructure are generally borne, because the alternative is a landscape in which each actor constructs its own implementation, reproducing exactly the fragmentation, inconsistency, and opacity already described.
It also follows that control over this infrastructure is a matter of sovereignty. The question is no different in principle from the question of who controls a nation’s energy grid, communications network, or financial clearing system. We return to this more fully in the chapter on digital sovereignty. For present purposes, it is enough to recognise that the choice between public, open, and sovereign infrastructure on the one hand, and private, proprietary, vendor-controlled infrastructure on the other, is not merely a technical or organisational matter. It is a constitutional choice.
The New Social Contract: Verifiable Rules for Human and Machine Actors
The social contract, in the tradition of Rousseau, Locke, and Rawls, describes the foundational agreement between citizens and the political community they together constitute. It is an exchange in which certain freedoms are surrendered in return for the security, order, and predictability that organised government is meant to provide. Its legitimacy rests on consent, on reciprocity, and above all on the intelligibility of its terms. A social contract whose rules are unknown, inaccessible, or subject to arbitrary revision ceases to be a contract in any meaningful sense. It becomes an instrument of domination.[6]
For centuries, those terms were communicated through law. Law was published, interpreted, contested, and, at least in principle, knowable to any citizen prepared to undertake the effort of legal understanding. The asymmetry between legal professionals and ordinary citizens was always real, and often considerable. Yet it was mediated by institutions designed to preserve access to the legal order for those subject to it. Courts, ombudsmen, legal aid, and public legal information services served, however imperfectly, as bridges between the complexity of law and the citizen’s right to understand the conditions under which they were governed.
That bridge is now coming under increasing strain in the context of digital governance. When the rules that shape a citizen’s interaction with the state are embedded in algorithms whose logic is proprietary, whose relationship to enacted law cannot be demonstrated, and whose outputs cannot be explained in terms intelligible to either the citizen or their legal representative, the social contract becomes unreadable in practice. The citizen is bound by rules they cannot meaningfully know. The state applies rules it cannot adequately explain. And courts, whose role is to adjudicate the meaning and application of knowable rules, are increasingly asked to review decisions whose operative basis is opaque not only to the individual concerned, but often to the authority that produced them.
The OLRF responds to this condition by proposing a necessary extension of the social contract for an age in which machine actors participate directly in governance. The promise is straightforward: any rule applied to a citizen must remain knowable by that citizen. In the digital age, that requirement can no longer be satisfied by the publication of statutory text alone. It requires that the normative basis on which the system operates (whether a formal Decision Tree, a normative validation framework against which an AI agent’s reasoning is checked, or an audit protocol against which autonomous legal reasoning is retrospectively assessed) be publicly available, verifiable, and traceable to the democratic authority from which it originates.
This is not a radical proposition. It is, on the contrary, the most conservative possible response to a profoundly new situation. It insists that the oldest commitments of the rule of law, namely that rules be knowable in advance, applied consistently, and open to challenge, must be preserved when law moves from a human interpretive environment into a machine executable one. It insists, in other words, that the social contract formed between citizens and their representatives must not quietly dissolve at the point where it is translated into software.
Yet the social contract of the digital age must be extended further. It must govern not only the relationship between citizen and state, but also the relationship between human and machine actors more generally. Autonomous software agents capable of acting in their own operational capacity to produce consequences in the world are already becoming participants in governance. They submit documents, query registers, prepare administrative outputs, and execute transactions. They are not citizens, and they do not possess rights of their own. But they must nonetheless be bound by rules, and those rules must be as authoritative, as verifiable, and as democratically accountable as the rules governing the human officials whose functions they increasingly support or replace.
The architecture described in this paper should be understood in precisely these terms. It is not merely a technical stack. It is the institutional form of an extended social contract: a framework in which every actor participating in governance, whether human or machine, operates on the basis of the same legal norms, derived from the same democratic source, and subject to the same requirements of traceability, reviewability, and contestability. The specific form of that framework (whether it operates through deterministic evaluation, guided agent reasoning, or retrospective audit) may vary according to the nature of the decision. The constitutional commitments it serves do not vary. They are constant across every model, every implementation, and every jurisdiction.
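The claim that the constitutional commitments stay constant while the decision model varies can be sketched in code. The sketch below is purely illustrative and assumes nothing about the actual OLRF design: the field names, the model labels, and the `contestable` check are invented. It shows one shared decision-record shape, so that traceability (which rule version was applied), reviewability (which facts the decision rested on), and contestability (reasons a citizen can challenge) are required regardless of whether the decision came from deterministic evaluation, guided agent reasoning, or retrospective audit.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only; all names are hypothetical, not an OLRF schema.
@dataclass
class DecisionRecord:
    model: str          # "deterministic" | "guided-agent" | "retrospective-audit"
    rule_ref: str       # rule and version applied (traceability)
    inputs: dict        # facts the decision rested on (reviewability)
    outcome: str
    reasons: list       # explanation in terms the affected person can contest
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def contestable(record: DecisionRecord) -> bool:
    """A record supports challenge only if every constitutional field is filled."""
    return all([record.model, record.rule_ref, record.inputs, record.reasons])

r = DecisionRecord(
    "deterministic",
    "housing-benefit/eligibility@2026-04",
    {"declared_income": 24_000},
    "granted",
    ["declared income below the ceiling set in Act XYZ, s. 12"],
)
print(contestable(r))  # True
```

The point of the shared shape is that the invariants live in the record, not in the model: a decision produced by an autonomous agent that omits its reasons fails the same check as a deterministic one that does.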
The machine does not enter into the social contract as a citizen. But it must remain bound by the terms under which that contract is given effect.
Preserving Democratic Values in a Machine-Readable World
The central question is not whether government can become faster, cheaper, or more consistent through software. All of that matters, but none of it reaches the core of the issue. The real question is whether democratic government can remain true to itself once the exercise of public power is increasingly mediated by technical systems.
What is at stake is not efficiency, but legitimacy. Democratic institutions do not command authority simply because they produce decisions. They command authority because they do so under conditions that citizens can recognise as lawful, intelligible, and fair. The promise of constitutional government has never been that power will always be wise or error-free. It has been that power will be exercised within a framework of known rules, public reasons, and institutional restraint.
That promise becomes fragile when government begins to operate through systems whose inner logic is difficult to see, difficult to test, and difficult to challenge. A citizen may still receive a decision. A procedure may still appear to have been followed. The law may still stand on the books exactly as before. And yet something essential may have changed: the individual is no longer confronting an authority that can explain itself in recognisably legal terms, but a process whose practical logic is hidden behind software, implementation choices, and technical abstraction. At that point, opacity is no longer a technical inconvenience. It becomes a constitutional problem.
The values at risk are neither abstract nor peripheral. They are the working commitments of a democratic order: that each person stands before the law with equal dignity; that a person affected by state power has a meaningful opportunity to be heard; that public authority is exercised in good faith and for public purposes; that decisions can be explained in terms the affected person can understand and contest. These are not decorative ideals added after the fact. They are the conditions under which public power becomes legitimate at all.
This is why the growing role of software in government, and particularly the growing role of AI, must be approached with both ambition and caution. Technical systems are naturally drawn toward standardisation, speed, and scale. Constitutional government, by contrast, often depends on judgment, context, and explanation. A machine can process cases in enormous numbers, but law is not exhausted by the sorting of cases into categories. It also asks whether relevant differences have been seen, whether discretion has been exercised where discretion is required, and whether a person can still understand why power was used in one way rather than another.
The risk is not only that automated administration may make mistakes. Any form of administration produces wrong results. The deeper risk is that public institutions may slowly adapt themselves to the logic of the machine rather than requiring the machine to operate within the boundaries of constitutional government. What is easy to compute may begin to displace what is important to justify. What scales well may begin to crowd out what must remain open to interpretation, exception, and review. In that shift, a state may become more efficient while becoming less accountable, and more powerful while becoming less trustworthy.[7][8]
The OLRF is proposed as a framework designed to prevent precisely this shift. It does so not by limiting the role of AI in governance (which would be neither realistic nor desirable) but by ensuring that every form of AI participation in governance, from deterministic evaluation to autonomous legal reasoning, operates within an infrastructure that preserves the constitutional commitments on which legitimate government depends. The specific mechanisms through which this is achieved will vary across the three models described in this paper. The principle they serve is the same: wherever rules are applied through machines, the logic of their application must remain traceable, verifiable, and answerable to the same democratic authority from which the law itself derives.
The point is not to make government frictionless. The point is to ensure that technical mediation does not dissolve the principles on which legitimate government depends. Properly understood, this is not a technocratic vision at all. It is a constitutional one. It asks whether the future state will merely be more automated, or whether it will also remain legible to the citizens in whose name it acts. It asks whether public power, once translated into systems, will still be capable of explanation, challenge, and restraint. And it proposes that these qualities must not be optional additions but the very conditions under which democratic government continues to deserve its name.
Footnotes
[1] Mahendran, T. (2025). Government-as-a-Platform als Leitbild des Deutschland-Stacks. Technik allein reicht nicht: Ein Governance-Rahmen für plattformbasierte Verwaltungsmodernisierung. Agora Digitale Transformation, Berlin, 7 July 2025. URL: https://agoradigital.de/wp-content/uploads/2025/07/25-07-07_ADT_Studie_GaaP-Deutschland-Stack.pdf
[2] Tim O'Reilly, "Government as a Platform", in: D. Lathrop & L. Ruma (eds.), Open Government, O'Reilly Media 2010, pp. 11-40.
[3] H. L. A. Hart, The Concept of Law, Oxford University Press 1961, chs. 5-6 (primary and secondary rules; law as a system of empowerment).
[4] Sébastien Krier & Gry Hasselbalch, "Data as Infrastructure", AlgorithmWatch 2021.
[5] OECD, "Cracking the Code: Rulemaking for Humans and Machines", OECD Digital Government Studies, Paris 2020.
[6] Lon L. Fuller, The Morality of Law, Yale University Press 1964, pp. 33-94.
[7] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin's Press 2018.
[8] Council of Europe, "Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques", DGI(2018)05.