OLRF
Part 3 Constitutional and Democratic Dimensions

Chapter 13

Fundamental Rights and the Digital State

Last updated: 2026-04-10

“Ethics cannot be a fig leaf for the systematic erosion of rights in the name of innovation.” --- Giovanni Buttarelli, European Data Protection Supervisor, Opinion on Artificial Intelligence, 2018

The Rights Framework: Charter, GDPR, and the AI Act

The constitutional case for machine-executable law does not end with the formal requirements of the rule of law. A system may be transparent, reviewable, and structurally faithful to enacted norms, and still fail if it does not preserve the rights of the people subjected to it. The deeper question, therefore, is not only whether automated governance remains lawful in a formal sense, but whether it remains compatible with the fundamental rights that define the legal order within which it operates.

In the European context, that question is shaped by three legal instruments that must be read together rather than in isolation. The Charter of Fundamental Rights supplies the constitutional horizon. The GDPR gives operational form to the protection of personal data and individual autonomy in data processing. The AI Act adds a regulatory layer for systems whose technical operation may affect rights and safety at scale. Each addresses a different dimension of the same problem. Together, they form the framework within which any architecture for automated public decision-making must be judged.1

The important point is that these instruments do not pull in different directions. Properly understood, they converge on a common demand. Public power exercised through digital systems must remain intelligible, bounded, reviewable, and accountable to the person affected. Rights must not survive merely as abstract promises while the real logic of decision-making disappears into technical systems. They must be preserved at the level where decisions are actually prepared, evaluated, issued, and challenged. That is why the OLRF should be understood not as a compliance wrapper placed around automated governance, but as an attempt to build a rights-compatible architecture from the outset2.

The three models introduced in Part I make the rights analysis more precise. Under Model A, the rights risks are concentrated in the design of the Decision Tree: an overbroad condition, a missing exception, or an inadequately modelled threshold may systematically violate rights across an entire population of cases. Under Model B, the rights risks are distributed between the tree and the Legal Agent: the agent’s subsumption may introduce bias, reduce individuals to profiles, or override contextual factors that the norm intended to preserve. Under Model C, the rights risks shift to the autonomous agent’s reasoning: a language model that reasons directly from statutory text may produce outputs that are individually plausible but systematically discriminatory, opaque, or incompatible with the constitutional demand for equal treatment. The rights framework applies with equal force to all three models. The mechanisms through which rights are protected, and the points at which they are most vulnerable, differ3.

Human Dignity and the Prohibition of Reduction

The first right at stake is also the most foundational. Human dignity requires that a person never be treated merely as an object of administration, a data point in a workflow, or a statistical profile to be sorted and processed. In the context of automated governance, this principle acquires a very specific meaning. It requires that the state preserve the distinction between a legally relevant abstraction and the person whom that abstraction represents4.

Every legal system works through categories. No norm can engage the fullness of a human life. The law necessarily abstracts. It speaks of residence, income, dependency, disability, age, prior conduct, legal status, and other categories through which consequences are assigned. That abstraction is not itself the problem. The problem arises when the technical system hardens those abstractions into a complete substitute for the person, and in doing so silently removes the very exceptions, qualifications, and atypical circumstances that the legal order itself meant to preserve.

The nature of this reduction varies across the three models. Under Model A, the reduction is structural: the person is reduced to a vector of DataPoints, and the Decision Tree produces an outcome based on that vector alone. If a legally relevant exception is not modelled, the person whose circumstances fall within that exception is treated as though the exception did not exist. Under Model B, the reduction risk is more subtle: the Legal Agent may engage with the individual’s circumstances more richly than the tree alone could, but if the validation framework is too rigid, it may force the agent’s nuanced subsumption back into the tree’s categorical structure, erasing the very contextual sensitivity that justified Model B in the first place. Under Model C, the reduction risk takes a different form entirely: a language model reasoning from statutory text may produce output that reads as individually attentive, but whose reasoning is in fact driven by statistical patterns rather than by genuine engagement with the person’s legal situation5.

This is where the OLRF’s architecture matters. A Decision Tree may formalise the deterministic part of a norm, but it must also model its exceptions, preserve its thresholds, and disclose the places where automation ends. The Coverage Map and the Discretion Point are not merely technical conveniences. They are the architectural means by which the system acknowledges that the person governed by the law cannot always be exhausted by the categories through which the law begins its work. Where the norm requires judgment, mitigation, or attention to atypical circumstance, the architecture must not simulate certainty where the law itself has left room for consideration.

For that reason, dignity in a digital state is not protected by rhetorical commitments to “human-centred AI” alone. It is protected by ensuring that the system cannot silently collapse the individual into the profile. If a legally relevant exception is omitted, that omission must be visible in the Coverage Map. If a norm calls for judgment, the system must stop and hand the matter to a human decision-maker through the Discretion Point. If an element is excluded from automation because the authority judges that deterministic processing would be too coarse, that exclusion must be publicly documented rather than buried in implementation practice. Only then is the person still encountered by the state as a subject of law rather than a residue of data processing.
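
To illustrate what "publicly documented rather than buried in implementation practice" could mean at the level of a concrete artefact, the following sketch shows one possible shape of a Coverage Map entry for an element that is deliberately excluded from automation. The field names, the helper method, and the example norm reference are illustrative assumptions, not the OLRF's published schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names are assumptions, not the OLRF's published Coverage Map schema.

@dataclass(frozen=True)
class CoverageMapEntry:
    norm_reference: str                         # the provision of the enacted norm
    element: str                                # the legal element (condition, exception, threshold)
    automated: bool                             # True if evaluated by the Decision Tree
    exclusion_rationale: Optional[str] = None   # public justification when automated is False
    handled_by: Optional[str] = None            # e.g. a Discretion Point or manual review channel

    def is_publicly_justified(self) -> bool:
        """An element excluded from automation must carry a published rationale and a handler."""
        return self.automated or (
            self.exclusion_rationale is not None and self.handled_by is not None
        )


# A hardship exception that the authority chose not to formalise (hypothetical example):
entry = CoverageMapEntry(
    norm_reference="§ 12(3) (hypothetical)",
    element="undue hardship exception",
    automated=False,
    exclusion_rationale="requires weighing of atypical personal circumstances",
    handled_by="Discretion Point DP-2",
)
assert entry.is_publicly_justified()
```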

Data Protection and Informational Self-Determination

Article 8 of the Charter and the GDPR do more than regulate technical data handling. They express a constitutional view of the relationship between the individual and the institutions that process information about them. That view is simple but far-reaching. Personal data may not be collected, combined, retained, or repurposed merely because doing so is administratively useful. The state must be able to justify why it is processing a given category of data, why that data is necessary, whether it is accurate, how long it will be retained, and on what legal basis each step is taken6.

Automated governance intensifies these requirements because the temptation to over-collect is built into the logic of technical systems. Data is cheap to store, easy to copy, and often attractive to reuse. But the mere availability of data does not make its use lawful. The constitutional demand remains one of restraint. Only the data necessary for the legal evaluation at hand should enter the process. Only the sources that can lawfully support that evaluation should be relied upon. And only the record necessary to justify, contest, and audit the resulting determination should be retained.

The OLRF is well placed to operationalise this restraint because it does not begin with open-ended data collection. It begins with the norm. The Decision Tree defines which facts are legally relevant. The DataPoint Schema defines the structure in which those facts must appear. The evaluation path records which of them actually mattered. The architecture therefore makes it possible to move from broad and often vague data practices toward a norm-bound model of data processing in which legal relevance, purpose limitation, and auditability are structurally linked. This does not automatically solve every data protection issue, but it creates the conditions under which data minimisation can become a real property of the system rather than a general aspiration written into a policy manual.
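
As a minimal sketch of how data minimisation can become a structural property of intake rather than a policy aspiration, the following code admits only the facts that a hypothetical DataPoint Schema declares legally relevant and rejects everything else before it is stored or evaluated. The schema contents, field names, and function names are assumptions for illustration.

```python
# Minimal sketch: data minimisation enforced at intake.
# The schema below is hypothetical; in the OLRF it would be derived from the Decision Tree.

DATAPOINT_SCHEMA = {
    "residence_months": int,
    "net_monthly_income_eur": float,
    "dependent_children": int,
}

class DataMinimisationError(ValueError):
    """Raised when submitted data falls outside the norm's declared facts."""

def admit_datapoints(submitted: dict) -> dict:
    """Admit only facts the norm declares relevant, coerced to their declared types."""
    extraneous = set(submitted) - set(DATAPOINT_SCHEMA)
    if extraneous:
        # Anything outside the schema is never stored and never enters the evaluation.
        raise DataMinimisationError(f"not legally relevant to this norm: {sorted(extraneous)}")
    return {key: DATAPOINT_SCHEMA[key](value) for key, value in submitted.items()}

# Only the declared facts enter the evaluation; any additional field would be rejected outright.
admit_datapoints({"residence_months": 14, "net_monthly_income_eur": 1830.0, "dependent_children": 2})
```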

The three models introduce different data protection profiles. Under Model A, data minimisation is enforceable with precision: the DataPoint Schema specifies exactly which facts enter the evaluation, and no other data is processed. Under Model B, the Legal Agent may require additional context for its subsumption (case law, precedent populations, contextual information about the applicant’s circumstances). This broader data access must be justified by the norm’s requirements for guided evaluation, and the Decision Package must document which additional data sources the agent accessed and why. Under Model C, the data protection risk is most acute: an autonomous agent reasoning from statutory text may draw on information that a deterministic system would never have accessed, and the retrospective audit must verify that the agent’s data access remained within the boundaries that the norm and the GDPR permit7.

That is also why privacy and explainability must not be treated as opposites. A rights-preserving system must be able to explain what it did without opening the door to uncontrolled data proliferation. The Decision Package helps here because it records the reasoning path in a structured way. It can support different levels of explanation for different legitimate recipients while remaining tied to the same underlying evaluation record. The citizen, the lawyer, the court, and the authority do not necessarily need the same presentation layer, but they must remain anchored in the same authentic and reviewable decision record.

Human Oversight, Article 22 GDPR, and the Boundary of Automation

The rights problem in automated governance is not simply that machines may be used. It is that the boundary between assistance and substitution can disappear very quickly once systems become operationally effective. What begins as support for document intake or factual preparation can become, almost by institutional drift, de facto normative decision-making. At that point the legal concern is no longer merely technical reliability. It is whether the person affected still encounters a legally accountable authority at all8.

This is why meaningful human oversight matters. Not as a ceremonial signature at the end of an otherwise opaque pipeline, and not as a theoretical possibility of intervention that is never realistically exercised, but as a real structural boundary where the law requires judgment rather than computation. In European data protection law this concern is most sharply captured by the prohibition on decisions based solely on automated processing in legally consequential settings, except under tightly bounded conditions. The deeper principle is broader than the text of any single provision. Where the legal order reserves evaluation, balancing, or discretion to a human authority, the architecture must preserve that reservation in operational form.

The three models present different constitutional profiles under Article 22 GDPR and Article 14 of the AI Act. Under Model A, the decision is fully automated in the technical sense, but the normative basis (the Decision Tree) is authored and published by a human authority, and every Discretion Point triggers a genuine human determination. Whether a Model A evaluation constitutes a decision “based solely on automated processing” within the meaning of Article 22(1) GDPR depends on whether the publication of the Decision Tree and the human authorship of the normative specification are considered sufficient human involvement, or whether Article 22 requires human intervention at the level of each individual case. The doctrinal answer is contested. The OLRF’s position is that where the norm is fully deterministic and no Discretion Point is triggered, Model A evaluation under a publicly authored and published Decision Tree satisfies the requirement of meaningful human involvement, because the human judgment has been exercised at the normative level rather than the case level9.

Under Model B, the Legal Agent’s subsumption constitutes a form of AI-driven normative reasoning that is then validated by the tree. This is not “solely automated” in the conventional sense, because the validation framework checks the agent’s output against a human-authored standard. But the question shifts: is the validation sufficiently meaningful to constitute genuine human oversight, or is it merely an automated check on another automated process? The OLRF’s answer is that the validation framework must be designed to trigger human review whenever the deviation exceeds defined thresholds, and that the deviation classification system (Chapter 5) must be calibrated so that constitutionally significant deviations always escalate to a human decision-maker rather than being resolved by the validation engine alone.
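
The calibration requirement can be expressed as a simple routing rule: deviations below a defined severity may be resolved by the validation engine, while constitutionally significant classes are always escalated to a human decision-maker. The deviation classes and the threshold shown below are illustrative assumptions; the authoritative taxonomy is the one defined in Chapter 5.

```python
from enum import Enum

# Illustrative deviation classes; the OLRF's taxonomy (Chapter 5) may differ.
class DeviationClass(Enum):
    NONE = 0
    TERMINOLOGICAL = 1      # different wording, same legal conclusion
    FACTUAL_WEIGHTING = 2   # evidence weighed differently within permitted bounds
    NORMATIVE = 3           # subsumption departs from the tree's legal standard
    RIGHTS_AFFECTING = 4    # deviation touches a fundamental-rights position

# Assumption for illustration: anything at or above NORMATIVE is constitutionally significant.
HUMAN_REVIEW_THRESHOLD = DeviationClass.NORMATIVE

def route_deviation(deviation: DeviationClass) -> str:
    """Decide whether the validation engine may resolve a deviation on its own."""
    if deviation.value >= HUMAN_REVIEW_THRESHOLD.value:
        return "escalate_to_human"   # a named official decides, with reasons, on the record
    return "resolve_by_validation_engine"

assert route_deviation(DeviationClass.TERMINOLOGICAL) == "resolve_by_validation_engine"
assert route_deviation(DeviationClass.RIGHTS_AFFECTING) == "escalate_to_human"
```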

Under Model C, Article 22 applies with its full force. An autonomous agent that reasons directly from statutory text and produces a legally consequential determination is, on any reasonable reading, making a decision based solely on automated processing. The OLRF’s retrospective audit does not change this characterisation, because the audit occurs after the determination has been produced. Model C therefore requires either that the determination be treated as a recommendation subject to mandatory human approval before it becomes effective, or that one of the exceptions under Article 22(2) GDPR applies (explicit consent, contractual necessity, or Union/Member State law authorising such processing with suitable safeguards). Assigning a norm that affects individual rights to Model C is therefore not merely a technical choice. It is a choice that must satisfy Article 22’s requirements, which in practice means that Model C determinations affecting fundamental rights must include a human approval step before they become legally effective.10

The OLRF’s answer is more exact than the generic language of “human in the loop.” It does not merely recommend human involvement somewhere in the process. It identifies where the norm itself ceases to be fully formalisable and requires escalation through a Discretion Point. The decisive issue is not whether a human appears somewhere near the system, but whether the architecture correctly distinguishes between deterministic application and lawful judgment. A human who merely approves a result they cannot meaningfully interrogate does not restore constitutional legitimacy. A human who receives the case at the point where the law calls for judgment, equipped with the full evaluative context and required to provide reasons, does.

Human oversight over an uncertified agent is constitutionally insufficient, because the supervisor lacks the benchmark against which to assess the agent’s output. A human official who oversees a Model B determination can meaningfully evaluate the agent’s subsumption only if they know that the agent has demonstrated the specific competences required for that normative domain. Without that knowledge, the oversight degrades to rubber-stamping: the official reviews an output without any basis for assessing whether the process that produced it was competent. The agent certification system (Chapter 10) provides the benchmark that meaningful oversight requires. It ensures that the human overseeing the agent knows, at minimum, that the agent has passed a domain-specific, model-specific, test-based examination, and that the examination results are publicly available for review. Oversight is thereby transformed from a procedural formality into a substantive institutional relationship between a qualified human and a qualified machine.

Seen in this light, the OLRF does not automate rights away and then reintroduce human involvement as a form of reassurance. It draws the line in the architecture itself. Deterministic rules may be executed deterministically. Judgment remains judgment. The significance of this distinction is hard to overstate. It is one of the main ways in which the framework keeps automated governance from becoming a slow and institutionally normalised transfer of legal authority from public law to software behaviour.

The Right to an Effective Remedy

A legal remedy is only effective if the person affected can understand what happened well enough to challenge it. That principle sounds obvious, yet it is precisely where many automated systems fail. They generate an outcome, but not an intelligible pathway from norm and fact to result. The citizen is told what the system concluded, but not why. The court is shown a decision, but not the legal reasoning that produced it. In such circumstances review may exist in theory while becoming hollow in practice11.

The right to an effective remedy therefore depends on more than formal access to a tribunal. It depends on the availability of an evidentiary record that permits genuine review. That record must show which norm was applied, which facts were treated as decisive, which exceptions were considered, which version of the executable specification was in force, and where, if anywhere, human judgment entered the process. Without this material, a person cannot identify the relevant ground of challenge, a representative cannot formulate it with precision, and a court cannot meaningfully test the legality of the act under review.

The Decision Package is the OLRF’s answer to this problem. Its significance is not merely technical. It gives the right to a remedy an institutional object. Rather than forcing the affected person to rely on post hoc reconstructions or generic documentation about how the system usually operates, it provides the structured record of how this decision was produced in this case.

The three models produce different evidentiary records, and the right to effective remedy requires that the record be sufficient regardless of the model. Under Model A, the Decision Package records the evaluation path: facts, conditions, exceptions, parameters, effects. This is the most transparent form, because every step is deterministic and reproducible. A court can re-execute the evaluation with the same inputs and verify whether the same output results. Under Model B, the Decision Package additionally records the agent’s subsumption, the validation framework’s assessment, and any deviation classification. The court can assess not only whether the tree was applied correctly, but whether the agent’s subsumption was within permissible bounds and whether the deviation, if any, was properly classified. Under Model C, the Decision Package records the agent’s full reasoning chain and the retrospective audit’s assessment. This is the most demanding form for judicial review, because the court must assess not a deterministic evaluation path but an AI-generated reasoning process. The audit assessment provides the structured framework for this review, but the court retains the authority to disagree with the audit’s conclusion and to find the agent’s reasoning legally deficient even where the audit protocol approved it12.
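
The difference between the three evidentiary records can be made concrete with a sketch of the minimal contents of a Decision Package under each model. The structure and field names are assumptions for illustration; the OLRF's package specification, not this sketch, governs what must actually be recorded.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of the evidentiary core of a Decision Package.
# Field names are assumptions; the OLRF's package specification governs in practice.

@dataclass
class DecisionPackage:
    norm_version: str                        # which executable specification was in force
    datapoints: dict                         # the facts that entered the evaluation
    evaluation_path: list[str]               # Model A core: conditions, exceptions, effects traversed
    agent_subsumption: Optional[str] = None  # Models B/C: the agent's reasoning, preserved verbatim
    deviation_class: Optional[str] = None    # Model B: how the validation framework classified any deviation
    audit_assessment: Optional[str] = None   # Model C: conclusion of the retrospective audit
    discretion_events: list[str] = field(default_factory=list)  # where human judgment entered

def reviewable_by_reexecution(package: DecisionPackage) -> bool:
    """Only a purely deterministic record (Model A) can be verified by re-running the tree."""
    return package.agent_subsumption is None and package.audit_assessment is None
```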

This also explains why the explanation layer matters. Different participants in the review process require different forms of intelligibility. The citizen needs an explanation they can understand. The legal representative needs a formulation that reveals possible grounds of challenge. The court needs a complete and formally reliable record. The authority needs an auditable trace. These are not competing requirements. They are different interfaces to the same underlying decision record. Rights are strengthened, not weakened, when explanation becomes a structured feature of the architecture rather than an afterthought appended after complaint or litigation.

Non-Discrimination: From Bias Detection to Structural Prevention

The discussion of discrimination in digital systems often begins with biased models, distorted training data, and proxy variables that reproduce social inequality under the appearance of neutrality. Those concerns are real. But they are not the whole issue. In public law, the deeper question is whether the architecture of decision-making makes discrimination harder to conceal, easier to detect, and more amenable to challenge13.

The OLRF contributes to this in a distinctive way. Its core normative artefact is not a model inferred from historical practice, but a formal specification derived from the legal norm itself. That matters because many harmful biases in machine learning systems arise from the fact that they learn from past decisions, including past distortions, exclusions, and inequalities. A deterministic Decision Tree does not learn from historical discrimination in that way. It does not generalise from patterns in a dataset. It applies the norm that has been formally specified and publicly linked to legal text.

That is a major advantage, but it is not a complete safeguard. A discriminatory law can still be specified faithfully. A formally neutral rule can still have discriminatory effects. And the choices made in constructing the executable layer can themselves matter greatly. If a legally relevant exception is not modelled, if a category is oversimplified, or if a discretionary element that often protects vulnerable groups is excluded from automation without adequate justification, the resulting system may still produce systematically unequal outcomes. The difference is that under the OLRF such choices need not remain invisible. They can appear in the Coverage Map, in the public Decision Tree, in the linkage to the norm, and in the pattern of Decision Packages that the system produces.

The three models present different discrimination risk profiles. Under Model A, the discrimination risk is concentrated in the tree: a condition that uses a proxy variable correlated with a protected characteristic will produce discriminatory outcomes systematically, but the proxy is visible in the published tree and can be challenged. Under Model B, the Legal Agent’s subsumption introduces a new discrimination vector: the agent may classify facts in ways that correlate with protected characteristics, even if the tree itself is formally neutral. The deviation record makes this detectable, because systematic deviations for cases involving particular demographic characteristics will appear in the population-level audit. Under Model C, the discrimination risk is most diffuse and hardest to detect: a language model reasoning from statutory text may produce individually reasonable outputs that, in aggregate, systematically disadvantage particular groups. The retrospective audit must include discrimination testing across the population of outcomes, not merely assessment of individual reasoning chains14.
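
A population-level check of the kind required under Models B and C can be sketched as a comparison of outcome rates across groups, in the spirit of the disparate-impact methods cited in footnote 14. The 0.8 threshold below echoes the American four-fifths rule and is purely an illustrative assumption; the legally relevant standard under EU and German equality law is substantive and context-dependent, not a fixed ratio.

```python
from collections import defaultdict

# Illustrative sketch: compare grant rates across groups in the population of outcomes.
# The 0.8 threshold mirrors the "four-fifths rule" and is an assumption, not an EU legal standard.

def grant_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions: [{"group": ..., "granted": bool}, ...] drawn from Decision Packages."""
    totals, grants = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        grants[d["group"]] += int(d["granted"])
    return {g: grants[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions: list[dict], threshold: float = 0.8) -> list[str]:
    """Flag groups whose grant rate falls below `threshold` times the highest group's rate."""
    rates = grant_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

# Any flagged group would trigger structural review of the tree (A), the deviation record (B),
# or the agent's reasoning across the affected population (C).
```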

That visibility is constitutionally important. A discriminatory effect that remains hidden inside a proprietary model is difficult to contest and often impossible to reconstruct. A discriminatory effect that can be traced to a publicly inspectable implementation choice stands in a different relation to law. It can be seen, argued about, challenged by equality bodies or civil society actors, and ultimately reviewed by courts. The OLRF does not promise a world without discrimination. No legal architecture can do that. What it does promise is a shift from opaque discrimination to contestable discrimination, and that shift is a precondition for any serious regime of rights protection in automated administration.

The AI Act and Compliance by Architecture

The AI Act is often discussed as if it were an additional compliance burden that public administrations will somehow have to accommodate once their systems are already built. That framing is too superficial. Read more carefully, the Act points toward an architectural conclusion. Systems used in high-risk public settings must be transparent, robust, governable, capable of human oversight, and monitorable after deployment. Those are not merely documentation duties. They are properties that must be designed into the system if they are to be real15.

This is why the relationship between the OLRF and the AI Act is more interesting than ordinary regulatory compliance. The framework does not simply bolt AI Act requirements onto an otherwise conventional system. It is organised around the same underlying constitutional logic. Deterministic execution supports robustness and consistency. The Registry and sub-normative linkage support transparency and traceability. The Decision Package supports accountability and review. Discretion Points operationalise human oversight where the law requires it. Coverage Maps make the limits of automation explicit. Monitoring across populations of decisions becomes possible because the architecture preserves a structured and verifiable record of what the system has done.

The three models map onto the AI Act’s requirements at different levels of stringency. Under Model A, the AI system’s role is limited to fact-finding, and the normative evaluation is performed by a deterministic engine. The AI Act’s requirements for transparency (Article 13) and human oversight (Article 14) are satisfied by the Decision Tree’s public specification and the Discretion Point architecture. Under Model B, the Legal Agent performs normative reasoning that is subject to validation. The AI Act’s requirements for accuracy (Article 15) and risk management (Article 9) apply with full force to the agent, because its subsumption is a high-risk AI function. The validation framework constitutes the risk management system that Article 9 requires. The deviation record constitutes the transparency mechanism that Article 13 requires. Under Model C, the autonomous agent is itself the high-risk AI system. Every requirement of the AI Act (risk management, data governance, transparency, human oversight, accuracy, robustness, and post-market monitoring) applies directly to the agent’s reasoning, and the retrospective audit must be designed to satisfy all of them16.

The convergence of constitutional and regulatory compliance is not a separate layer added on top of the OLRF’s design. It emerges from the same design choices that make machine-executable law constitutionally credible in the first place. A system designed to ensure proper exercise of discretion under §40 VwVfG, with full documentation of reasoning under §39 VwVfG, and with the separation of AI assistance from normative authority that §35a VwVfG requires, is also a system that satisfies the AI Act’s requirements for human oversight, transparency, accuracy, and risk management. This convergence is not coincidental. The AI Act’s requirements for high-risk systems are the European regulatory expression of the same constitutional commitments that the German administrative law framework articulates in more granular doctrinal form. The OLRF does not comply with the AI Act by adding compliance mechanisms to a pre-existing architecture. It complies because the architecture was designed, from the outset, around the principles that the AI Act codifies.

That does not mean that every legal or organisational question disappears. A system may still be poorly governed, insufficiently resourced, or politically misused. But it does mean that the central demands of the AI Act are not treated as external constraints reluctantly satisfied after the fact. They emerge from the same design choices that make machine-executable law constitutionally credible in the first place. In that sense, the OLRF offers something stronger than compliance by paperwork. It offers the prospect of compliance by construction.

The AI Act’s conformity assessment (Article 43) and the OLRF’s agent certification operate at different levels but converge on the same demand. The AI Act certifies the system’s general capabilities: robustness, data governance, transparency, oversight provisions. The OLRF certifies its specific qualification for a defined normative domain under a defined model. An AI system that has passed the AI Act’s conformity assessment is not thereby qualified to perform subsumption under German tax law. An agent that holds the OLRF’s Model B certification for German tax law has demonstrated a normative competence that the AI Act does not assess and was not designed to assess. The two assessments are complementary: the AI Act assessment is a necessary horizontal condition, the OLRF certification is an additional vertical condition. Neither replaces the other. Together, they ensure that an agent exercising normative functions has been assessed both as a general AI system and as a specific normative actor (Chapter 10).

Conclusion: Fundamental Rights as Design Constraints

The central lesson of this chapter is that fundamental rights do not stand outside digital governance as moral aspirations to be remembered once the engineering is complete. They are design constraints from the beginning17. They determine what may be automated, what must remain contestable, what must remain visible, what data may be processed, where judgment must remain human, and what evidentiary record must exist if the person affected is to remain a bearer of rights rather than a passive object of administration.

The three models do not dilute these constraints. They distribute them across different architectural mechanisms. Under Model A, rights are protected by the precision of the specification: the tree must be correct, complete, and publicly reviewable. Under Model B, rights are protected by the rigour of the validation: the framework must catch deviations that cross constitutional thresholds, and the deviation record must make the agent’s reasoning contestable. Under Model C, rights are protected by the depth of the audit: the retrospective assessment must be rigorous enough to detect systematic violations that no individual case review would reveal, and the agent’s certification must ensure that only systems meeting defined constitutional standards are permitted to reason autonomously from law.

Human dignity is preserved when the individual is not reduced to the profile, regardless of whether the reduction would be performed by a tree (A), an agent’s subsumption (B), or a language model’s reasoning (C). Data protection is preserved when processing remains norm-bound, minimised, and auditable, with the scope of permissible data access calibrated to the model’s requirements. Effective remedy is preserved when the reasoning path can actually be examined, whether that path is a deterministic evaluation record (A), an annotated deviation record (B), or a full reasoning chain with audit assessment (C). Non-discrimination is strengthened when the implementation of the norm becomes visible and challengeable, with discrimination testing calibrated to the model’s risk profile: structural review of the tree (A), population-level deviation analysis (B), or systematic outcome testing across demographic groups (C). Human oversight remains real when the law’s own demand for judgment is recognised as a boundary that automation may not cross, with Article 22 GDPR applied with varying stringency across the three models. And the AI Act’s requirements become operational when the system is built around those same constraints from the start, with compliance obligations escalating in proportion to the degree of AI autonomy.

A digital state worthy of its name will not be one in which rights are invoked in mission statements while decisions are made in opaque technical systems that no affected person can meaningfully confront. It will be one in which the exercise of public power through software remains answerable, at every decisive point, to the same constitutional order that binds public power in every other form. That is the standard against which automated governance should be judged. It is also the standard that the OLRF is designed to meet.

Footnotes

  1. The requirement that the three instruments be read as a coherent framework rather than as separate regulatory layers reflects the CJEU’s approach in C-311/18, Schrems II (2020), in which the Court assessed the adequacy of data protection not by reference to the GDPR alone but by reference to the Charter rights that the GDPR is designed to implement. For the convergence of Charter, GDPR, and AI Act in the context of automated public decision-making: Veale, M. and Zuiderveen Borgesius, F., “Demystifying the Draft EU Artificial Intelligence Act”, Computer Law Review International, Vol. 22, No. 4, 2021, pp. 97 ff.

  2. The convergence thesis, that the Charter, the GDPR, and the AI Act express a single constitutional demand in different regulatory registers rather than imposing separate and potentially conflicting obligations, is not self-evident. A superficial reading might suggest tension: the Charter protects rights at the highest level of abstraction, the GDPR operationalises data protection through detailed procedural requirements that can conflict with transparency objectives, and the AI Act introduces a risk-based regulatory framework whose categories do not always map neatly onto fundamental rights doctrine. The convergence becomes visible only when the three instruments are read teleologically, from their shared purpose rather than from their textual surface. That shared purpose is the preservation of the individual’s legal subjectivity in the face of systems whose scale and opacity threaten to reduce that subjectivity to a residual formality. The European Court of Justice has increasingly adopted this integrated reading. In C-311/18, Schrems II (2020), the Court assessed the adequacy of a data transfer framework not by reference to the GDPR’s procedural requirements alone but by reference to the Charter rights (Arts. 7, 8, 47) that the GDPR is designed to implement, holding that procedural compliance without substantive rights protection is constitutionally insufficient.

  3. The distribution of constitutional risk across different components of a decision-making system, rather than its concentration in a single point, is a recognised problem in the accountability literature. Nissenbaum terms it the “problem of many hands”: when multiple actors contribute to an outcome, each contributing actor may act within acceptable bounds while the aggregate result violates rights that no single actor can be held responsible for. The three-model taxonomy translates this into architectural terms. In Model A, accountability is concentrated (the tree is the single point of failure). In Model B, it is distributed (the agent and the tree share responsibility, with the validation framework as the accountability bridge between them). In Model C, it is diffuse (the agent reasons autonomously, and the audit protocol must reconstruct accountability retrospectively). The constitutional adequacy of each model depends on whether the accountability mechanism matches the risk distribution: concentrated risk requires precise specification review, distributed risk requires validation rigour, diffuse risk requires population-level audit. See: Nissenbaum, H., “Accountability in a Computerized Society”, Science and Engineering Ethics, Vol. 2, No. 1, 1996, pp. 25 ff.; for the application to AI systems in public administration: Bovens, M. and Zouridis, S., “From Street-Level to System-Level Bureaucracies: How Information and Communication Technology is Transforming Administrative Discretion and Constitutional Control”, Public Administration Review, Vol. 62, No. 2, 2002, pp. 174 ff.

  4. Art. 1 Abs. 1 GG; Art. 1 EU-Grundrechtecharta. The Bundesverfassungsgericht has consistently derived from human dignity a prohibition on treating individuals as mere objects of state action: BVerfGE 30, 1 (26, Abhörurteil, 1970), the foundational Objektformel; BVerfGE 115, 118 (153, Luftsicherheitsgesetz, 2006). For the application to automated administration: Martini, M., Blackbox Algorithmus: Grundfragen einer Regulierung Künstlicher Intelligenz, Springer 2019, pp. 78 ff., arguing that human dignity requires not merely that a human be “in the loop” but that the system preserve the capacity to recognise the individual as more than the sum of their data points.

  5. The distinction between structural reduction (Model A), forced re-categorisation (Model B), and simulated engagement (Model C) is, to our knowledge, novel. It draws on Hildebrandt’s analysis of the difference between “being profiled” (subjected to a statistical classification) and “being addressed” (encountered as a legal subject with rights): Hildebrandt, M., Smart Technologies and the End(s) of Law, Edward Elgar 2015, pp. 172 ff. Model C’s risk of simulated engagement (output that reads as contextually sensitive but is statistically generated) corresponds to what Bender et al. term “stochastic parrots”: systems that produce linguistically fluent output without the semantic comprehension that would make genuine engagement with individual circumstances possible. See: Bender, E. M. et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, Proceedings of FAccT 2021, pp. 610 ff.

  6. BVerfGE 65, 1 (Volkszählung, 1983), deriving the right to informational self-determination from Art. 2 Abs. 1 i.V.m. Art. 1 Abs. 1 GG; GDPR Art. 5(1)(b) (purpose limitation), Art. 5(1)(c) (data minimisation), Art. 5(1)(e) (storage limitation). For the constitutional foundations: Simitis, S., “Die informationelle Selbstbestimmung: Grundbedingung einer verfassungskonformen Informationsordnung”, NJW 1984, S. 398 ff.

  7. The escalating data access profile across the three models creates an escalating data protection burden. The principle of data protection by design (Art. 25 GDPR) requires that the technical architecture enforce the data minimisation principle at the system level. Under Model A, this is straightforward: the DataPoint Schema specifies the permissible data universe exhaustively. Under Models B and C, data protection by design requires additional architectural controls: access logging, purpose-bound retrieval constraints, and retrospective audit of whether the agent’s data access was proportionate to the normative task. See: Article 29 Working Party, “Guidelines on Data Protection Impact Assessment”, WP 248, 2017; Bygrave, L. A., “Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements”, Oslo Law Review, Vol. 4, No. 2, 2017, pp. 105 ff.

  8. The problem of institutional drift from assistance to substitution is analysed in: Parasuraman, R. and Riley, V., “Humans and Automation: Use, Misuse, Disuse, Abuse”, Human Factors, Vol. 39, No. 2, 1997, pp. 230 ff.; Skitka, L. J., Mosier, K. L., and Burdick, M., “Does Automation Bias Decision-Making?”, International Journal of Human-Computer Studies, Vol. 51, No. 5, 1999, pp. 991 ff. For the legal dimension: Citron, D. K., “Technological Due Process”, Washington University Law Review, Vol. 85, 2008, pp. 1249 ff., arguing that the replacement of human judgment by automated systems without adequate procedural safeguards constitutes a violation of due process.

  9. The doctrinal question whether a decision produced by a deterministic evaluation of a publicly authored Decision Tree constitutes a decision “based solely on automated processing” under Art. 22(1) GDPR is contested. The narrow reading holds that “solely” refers to the absence of any human intervention in the individual case, regardless of human involvement in the system’s design: Mendoza, I. and Bygrave, L. A., “The Right Not to Be Subject to Automated Decisions Based on Profiling”, in Synodinou, T. et al. (eds.), EU Internet Law, Springer 2017, pp. 77 ff. The broader reading holds that human authorship of the normative specification (the Decision Tree) and human oversight of its publication in the Registry constitute meaningful human involvement: Wachter, S., Mittelstadt, B., and Russell, C., “Counterfactual Explanations without Opening the Black Box”, Harvard Journal of Law and Technology, Vol. 31, No. 2, 2018, pp. 841 ff. (arguing that the relevant question is whether the affected person has access to an intelligible explanation, not whether a human physically pressed a button). The OLRF’s design supports both readings: under the narrow reading, Discretion Points provide the case-level human intervention that Article 22 requires; under the broad reading, the dual publication model provides the system-level human authorship.

  10. Art. 22(2) GDPR permits solely automated decision-making where authorised by Union or Member State law providing suitable safeguards (lit. b). §35a VwVfG constitutes such authorisation for fully automated administrative acts, but its scope is limited to decisions involving no discretion and no assessment margins (“weder ein Ermessen noch einen Beurteilungsspielraum”). Model C determinations, which involve autonomous normative reasoning, fall outside the scope of §35a VwVfG and therefore cannot rely on Art. 22(2)(b) GDPR through §35a. They require either explicit consent under Art. 22(2)(a) or a separate legislative authorisation with suitable safeguards. See: Prell, L., “§35a VwVfG und der ‘vollständig automatisierte Erlass eines Verwaltungsaktes’”, NVwZ 2018, pp. 1255 ff.; Berger, A., “Der automatisierte Verwaltungsakt”, NVwZ 2018, pp. 1260 ff. For the interaction between §35a VwVfG and the data protection requirements for automated individual decisions under Art. 22 GDPR from the perspective of public administration specifically: Braun Binder, N., “Automatisierte Entscheidungen: Perspektive Datenschutzrecht und öffentliche Verwaltung”, SZW 2020, arguing that the two regimes (administrative procedure law and data protection law) impose overlapping but not identical requirements on automated decision-making, and that compliance with §35a VwVfG does not automatically satisfy Art. 22 GDPR. The OLRF’s three-model architecture addresses this overlap by design: Model A satisfies §35a (bound decision, deterministic evaluation) and, through the Discretion Point mechanism, provides the “meaningful human involvement” that the narrow reading of Art. 22 requires; Models B and C, which fall outside §35a, require either explicit consent under Art. 22(2)(a) or a separate legislative authorisation with suitable safeguards under Art. 22(2)(b).

  11. Art. 47 EU-Grundrechtecharta (right to an effective remedy and to a fair trial); Art. 19 Abs. 4 GG (Rechtsschutzgarantie); CJEU C-222/86, Heylens, 1987, para. 15 (effective judicial protection requires sufficient reasons to enable judicial review). For the specific challenge of judicial review of automated decisions: Doshi-Velez, F. et al., “Accountability of AI Under the Law: The Role of Explanation”, Berkman Klein Center Working Paper, 2017; Selbst, A. D. and Barocas, S., “The Intuitive Appeal of Explainable Machines”, Fordham Law Review, Vol. 87, 2018, pp. 1085 ff.

  12. The proposition that a court retains the authority to disagree with the retrospective audit’s assessment and to find the agent’s reasoning legally deficient on independent grounds follows from the constitutional principle that judicial review cannot be delegated to or constrained by technical systems. See: BVerfGE 101, 106 (Akteneinsichtsrecht, 2000), holding that the right to effective judicial protection requires that the court have access to the complete evidentiary basis of the decision, not merely a summary or assessment produced by the deciding authority. For Model C, this means that the court must have access to the agent’s full reasoning chain, not merely to the audit protocol’s conclusion about that chain.

  13. Art. 21 EU-Grundrechtecharta (non-discrimination); Art. 3 Abs. 3 GG (Diskriminierungsverbote); for the application to algorithmic systems: Barocas, S. and Selbst, A. D., “Big Data’s Disparate Impact”, California Law Review, Vol. 104, 2016, pp. 671 ff.; European Commission, “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust”, COM(2020) 65 final, pp. 10 ff. For the Dutch Toeslagenaffaire as a case study of how automated systems can produce systematic discrimination through facially neutral rules: “Ongekend Onrecht”, Report of the Parliamentary Inquiry Committee (Parlementaire ondervragingscommissie Kinderopvangtoeslag), Tweede Kamer 2020; Amnesty International, “Xenophobic Machines: Discrimination through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal”, 2021.

  14. The distinction between concentrated discrimination risk (Model A), subsumption-mediated discrimination (Model B), and diffuse aggregate discrimination (Model C) requires different testing methodologies. For Model A, the appropriate method is structural review of the Decision Tree for proxy variables and facially neutral conditions with discriminatory effects (disparate impact analysis). For Model B, the appropriate method is population-level analysis of the deviation record, testing whether deviations correlate with protected characteristics. For Model C, the appropriate method is outcome testing across demographic groups, similar to the techniques used in fair lending analysis under the US Equal Credit Opportunity Act. See: Kleinberg, J., Mullainathan, S., and Raghavan, M., “Inherent Trade-Offs in the Fair Determination of Risk Scores”, Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS), 2017; Chouldechova, A., “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments”, Big Data, Vol. 5, No. 2, 2017, pp. 153 ff.

  15. Regulation (EU) 2024/1689 (AI Act), Arts. 9 (risk management), 13 (transparency), 14 (human oversight), 15 (accuracy, robustness, cybersecurity). For the architectural reading: Veale, M. and Zuiderveen Borgesius, F., op. cit.; Ebers, M. et al., “The European Commission’s Proposal for an Artificial Intelligence Act: A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)”, J, Vol. 4, No. 4, 2021, pp. 589 ff.

  16. The escalating AI Act compliance burden across the three models follows from the Act’s risk-based approach (Art. 6, Annex III). Under Model A, the AI system (the fact-finding agent) is an input provider to a deterministic evaluation, and its high-risk classification depends on the domain (social benefits, law enforcement, etc.). Under Model B, the Legal Agent performs a high-risk function (normative subsumption in a legally consequential context), and the validation framework constitutes the risk management system required by Art. 9. Under Model C, the autonomous agent is itself the high-risk AI system in the most direct sense, and the full conformity assessment procedure of Art. 43 applies. The audit protocol must be designed to satisfy Art. 9 (risk management through structured assessment), Art. 13 (transparency through reasoning chain preservation), Art. 14 (human oversight through the approval step), and Art. 15 (accuracy through outcome testing).

  17. The proposition that fundamental rights function as design constraints rather than as compliance additions has two intellectual roots that converge in the OLRF. The first is the “value-sensitive design” tradition in information systems research, which holds that human values must be integrated into the technical design process from the earliest stages rather than being assessed after the system is built. Friedman and Kahn demonstrated that values such as autonomy, fairness, and accountability are not extrinsic to technical systems but are embedded in their architecture whether or not the designers intended it, and that conscious value-sensitive design produces architectures that protect those values more reliably than retrospective correction (Friedman, B. and Kahn, P. H., “Human Values, Ethics, and Design”, in Jacko, J. and Sears, A. (eds.), The Human-Computer Interaction Handbook, Lawrence Erlbaum 2003, pp. 1177 ff.; Friedman, B., Kahn, P. H., and Borning, A., “Value Sensitive Design and Information Systems”, in Zhang, P. and Galletta, D. (eds.), Human-Computer Interaction in Management Information Systems, M. E. Sharpe 2006, pp. 348 ff.). The second root is the tradition of strategic legal design that Breidenbach has developed over two decades of collaboration with the German Federal Ministry of Justice. Breidenbach’s central argument is that the quality of a law is determined not at the moment of its textual enactment but at the moment of its structural design: when the decision logic is visualised, the exceptions modelled, the thresholds specified, and the discretionary spaces identified. A Rulemap that makes the subsumption structure of a norm visible also makes visible the points at which rights are protected or endangered by the norm’s design. An exception that safeguards vulnerable populations is a structural feature of the Rulemap, not an afterthought. A Discretion Point that preserves space for individual circumstance is a design decision, visible in the map before a single case is processed. In this sense, Breidenbach’s method anticipates at the level of legislative design what the OLRF enforces at the level of execution architecture: the insight that constitutional quality is a property of the structure, not a label applied to the output. See: Breidenbach, S., Was Gesetze sein könnten: Mit Methode zum guten Gesetz, C. H. Beck, München 2025, insbes. Kap. 3 (Visualisierung der Entscheidungslogik) und Kap. 5 (Digitale Gesetzgebung als strategischer Prozess); Breidenbach, S., Vortrag “Law as Code”, Max-Planck-Institut für ausländisches und internationales Privatrecht, Speaker Series “Artificial Justice”, Mai 2025.