Chapter 8
The Interface Layer — How AI Systems Access the Normative Infrastructure
Last updated: 2026-04-10 Open for review
“Code can make behavior simply impossible, where law can only make it punishable.” Lawrence Lessig, Code and Other Laws of Cyberspace, Basic Books, 1999.
The Shift from Passive Tools to Active Agents
AI systems are no longer just reactive tools that wait for queries and prompts. They are becoming autonomous actors that reach into external systems, retrieve data, invoke services, and trigger processes on their own initiative. This shift from passive instrument to active agent is already underway, and it will not reverse. The only question is whether it happens under governed conditions or not.
Without formal structure, the result is AI agents whose reach and permissions are defined solely by the developers who built them, accountable to no one beyond their operators. With formal structure, it becomes possible to ensure that these agents act within boundaries that are legally grounded, democratically legitimated, and technically enforced.
The OLRF proposes a formally specified interface through which AI systems access the normative infrastructure. This interface turns the question of what AI systems can reach, what they cannot, and on what terms, from an implementation detail buried in application code into an auditable, constrainable contract between the AI system and the legal order.1 The interface is designed to be compatible with emerging industry standards for AI tool access (notably the Model Context Protocol, published as an open standard in November 2024)2 while imposing the additional constitutional constraints that a legally authoritative infrastructure requires.
Five Tools and Their Constitutional Role
The interface exposes five tools to AI clients. Each is precisely scoped to a specific type of interaction between an AI system and the normative infrastructure. The scoping is deliberate and constitutionally significant: each tool does exactly what the division of labour established in Chapter 4: The Controlled Division of Labour Between Humans and AI requires, and nothing more. Together they define the complete surface area of legitimate AI interaction with the normative layer: an interaction that is rich and capable within its defined scope, and constitutionally foreclosed outside it.
Tool 1: Norm Evaluation
This is the operational core of the interface. An AI system submits a structured set of typed, validated facts together with the identifier of the applicable Decision Tree and the moment in time for which the evaluation should be performed. The server resolves the correct version of the Decision Tree from the Registry’s temporal index, submits the facts to the evaluation engine, and returns a complete, cryptographically signed Decision Package. The AI system receives the outcome of the legal evaluation. It does not perform it.
Several aspects of this tool’s design carry constitutional significance. The requirement that submitted facts be typed and validated (matched against the DataPoint schema before they enter the evaluation) enforces the boundary between the AI’s probabilistic world and the normative layer’s deterministic one: facts that do not conform to the schema cannot reach the evaluation engine. The requirement to specify the evaluation moment enables the server to resolve the correct historical version from the Registry and to record that version’s cryptographic hash in the Decision Package. And the requirement that AI-extracted fact values carry an explicit confidence level ensures that courts and auditors can permanently distinguish between facts established through authoritative register queries and facts inferred through machine learning.
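The typed-fact boundary described above can be sketched in code. The following is a minimal illustration, not the OLRF wire format: the class names, field names, and the rule that AI-extracted facts must carry a confidence level are modelled directly on the requirements stated in this section, but every identifier is an assumption introduced for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shapes for a Tool 1 submission; the real DataPoint schema
# is defined by the Decision Tree, not by this sketch.
@dataclass
class Fact:
    name: str
    value: object
    source: str                          # e.g. "authoritative_register" or "ai_extraction"
    confidence: Optional[float] = None   # mandatory when the fact is AI-derived

@dataclass
class EvaluationRequest:
    decision_tree_id: str
    evaluation_moment: str               # ISO 8601 instant for temporal resolution
    facts: list

def check_confidence_rule(request: EvaluationRequest) -> list:
    """Enforce the boundary rule: AI-extracted facts without an explicit
    confidence level cannot reach the evaluation engine."""
    errors = []
    for fact in request.facts:
        if fact.source == "ai_extraction" and fact.confidence is None:
            errors.append(f"{fact.name}: AI-extracted fact lacks confidence level")
    return errors
```

A submission mixing a register-sourced fact with an unqualified AI-extracted fact would be rejected before evaluation, which is exactly the schema-gate behaviour the text describes.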
In Model A, Tool 1 performs the complete normative evaluation. In Model B, Tool 1 performs the validation of the Legal Agent’s determination against the Decision Tree’s structure. In Model C, Tool 1 performs the retrospective audit mapping. The tool’s interface is identical across all three models. What changes is the function it serves within the overall process.
Tool 2: Norm Query
This tool returns the structural metadata of a Decision Tree without performing an evaluation. An AI system uses it to discover what facts a Decision Tree requires, what types those facts must be, what sources they should be drawn from, where Discretion Points exist, what Parameters are currently in force, and what the Coverage Map specifies about the scope of automated application. The norm query tool enables targeted fact-finding: the AI system understands the normative landscape before it assembles the factual record, ensuring that the right sources are consulted and that the submission is complete.
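The kind of structural metadata a norm query returns, and how it enables targeted fact-finding, can be illustrated as follows. The field names and values are assumptions for the sketch, not a specification of the response format.

```python
# Illustrative Tool 2 response: what a Decision Tree requires, where
# discretion exists, and which Parameters are in force.
norm_metadata = {
    "decision_tree_id": "housing.benefit.v7",
    "required_facts": [
        {"name": "monthly_rent", "type": "decimal", "source": "tenancy_register"},
        {"name": "household_income", "type": "decimal", "source": "tax_register"},
    ],
    "discretion_points": ["hardship_clause"],
    "parameters_in_force": {"max_eligible_rent": 900},
    "coverage_map": {"model": "A"},
}

def sources_to_consult(metadata: dict) -> set:
    """Targeted fact-finding: derive which authoritative sources must be
    queried before the factual record is assembled."""
    return {fact["source"] for fact in metadata["required_facts"]}
```

An AI system calling this before Tool 1 knows in advance that the tenancy and tax registers must be consulted, rather than discovering missing facts through evaluation failures.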
Tool 3: Fact Validation
Before submitting facts for evaluation, an AI system can submit them for pre-validation against the DataPoint schema of the target Decision Tree. The validation tool checks type conformance, value ranges, source declarations, required confidence levels, and inter-fact consistency constraints. It returns a structured validation report identifying any fact that would be rejected by the evaluation engine, together with the specific schema requirement that the fact fails to meet. This tool serves as a quality gate: it allows the AI system to correct its factual record before the legally consequential evaluation is triggered, reducing the probability of evaluation failures and improving the quality of the data that reaches the normative layer.
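The structured validation report can be sketched as a list of per-fact findings, each naming the specific schema requirement that failed. The schema dictionary and rule names below are illustrative assumptions.

```python
# Hypothetical DataPoint schema fragment for the sketch.
SCHEMA = {
    "monthly_rent": {"type": float, "min": 0},
    "household_income": {"type": float, "min": 0},
}

def prevalidate(facts: dict) -> list:
    """Tool 3 sketch: return one entry per fact the evaluation engine would
    reject, together with the specific requirement it fails."""
    report = []
    for name, value in facts.items():
        rule = SCHEMA.get(name)
        if rule is None:
            report.append({"fact": name, "failed": "not declared in DataPoint schema"})
        elif not isinstance(value, rule["type"]):
            report.append({"fact": name, "failed": f"type must be {rule['type'].__name__}"})
        elif value < rule["min"]:
            report.append({"fact": name, "failed": f"value below minimum {rule['min']}"})
    return report
```

An empty report means the factual record is ready for the legally consequential evaluation; a non-empty report lets the AI system correct its record first.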
Tool 4: Decision Package Retrieval
This tool retrieves a previously produced Decision Package by its identifier, together with the Decision Tree version and Registry entry against which it was evaluated. Its primary use is in appeal and review proceedings: a court, an auditor, or a citizen’s representative retrieves the complete decision record to understand how a specific determination was reached. The tool enforces access controls based on the requesting party’s role and authorisation, ensuring that personal data contained in Decision Packages is accessible only to parties with a legitimate interest.
Tool 5: Decision Explanation
This tool transforms a Decision Package into human-readable explanations at multiple levels of abstraction. The legal register produces a structured explanation in formal legal language, suitable for courts and legal professionals. The administrative register produces a plain-language explanation, suitable for communication to the affected citizen. The citizen register produces a further simplified explanation, optimised for comprehensibility to a person without legal or technical training, including explicit information on rights of appeal and the specific aspects of the decision that are potentially contestable.
The explanation tool is the only point in the interface at which probabilistic AI processing is involved: the natural language generation of the various registers requires a language model. This processing occurs entirely after the normative evaluation is complete, and it produces explanatory text derived from the Decision Package, not normative determinations reached through independent AI reasoning. The Decision Package is specified and cryptographically verified before explanation begins. The AI explains the law’s decision. It does not revisit, reassess, or supplement it.
The Security Model: Read-Only, Logged, Cryptographically Bound
The OLRF MCP server is designed as a constrained legal interface with a deliberately narrow attack surface. Seen from the perspective of an MCP client, the normative layer is read-only. No exposed tool permits creation, modification, deletion, or mutation of Decision Trees, Registry entries, Coverage Maps, or any other normative artefact. The interface supports discovery, validation, evaluation, and explanation. It does not support normative write operations. This is not just a policy enforced by permissions; it is an architectural constraint. The absence of write-capable endpoints means that credential misuse, privilege escalation, or client compromise cannot by themselves mutate the normative layer through the MCP path.
The security model of an infrastructure that creates legally binding results, however, requires more than a read-only design. It must establish identity, verify qualification, constrain authority, protect transport, preserve integrity, segment risk, support forensic reconstruction, and remain resilient under failure and attack. The following fifteen controls define this model. They are organised along the five defence layers through which every request must pass, together with the cross-cutting controls that operate across all layers simultaneously.
Layer 1: Perimeter
The perimeter layer protects the communication channel before any application logic is reached.
Control 1: Transport Security and Session Binding.
All traffic must be protected by mutually authenticated secure transport with modern cipher suites and forward secrecy. Session establishment must bind authenticated identity to channel state. Requests must include anti-replay protections such as nonces, timestamps, or one-time request identifiers, and the server must reject stale or replayed submissions. This is especially important where evaluations may trigger downstream legal effects or generate signed decision records. Without replay protection, a valid request could be resubmitted outside its intended temporal and procedural context, producing a legally authentic but operationally fraudulent Decision Package.
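A minimal sketch of the anti-replay requirement, assuming a nonce store and a freshness window (the 300-second value and the in-memory set are illustrative; a production deployment would persist consumed nonces and rely on a trusted time source):

```python
import time
from typing import Optional

MAX_AGE_SECONDS = 300     # illustrative freshness window
_seen_nonces = set()      # in-memory stand-in for a persistent nonce store

def accept_request(nonce: str, timestamp: float,
                   now: Optional[float] = None) -> bool:
    """Reject stale or replayed submissions before any application logic runs."""
    now = time.time() if now is None else now
    if now - timestamp > MAX_AGE_SECONDS:
        return False      # stale: outside the freshness window
    if nonce in _seen_nonces:
        return False      # replayed: this nonce was already consumed
    _seen_nonces.add(nonce)
    return True
```

The point of the sketch is the failure mode the text warns about: without the nonce check, a captured but perfectly valid request could be resubmitted later and still pass every cryptographic test.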
Layer 2: Identity and Qualification
The identity layer establishes who is acting and verifies that the actor is qualified to act. It contains four controls. The first three establish identity, authorisation, and delegation. The fourth, Agent Certification Verification, is the most important quality gate in Models B and C and is the point at which the certification system described in Chapter 10 meets the interface architecture.
Control 2: Identity and Trust Establishment.
Every MCP client must authenticate using a credential scheme appropriate to the assurance level of the operation it requests. Discovery operations may require lower assurance. Evaluation operations that generate legally consequential signed records require high assurance. The server must support differentiated assurance levels, including client certificate authentication, qualified electronic signatures under eIDAS, and qualified trust service tokens. Authentication must occur before any request reaches the application layer. Unauthenticated requests must be rejected without exception.
Control 3: Authorisation and Policy Enforcement.
Authentication establishes who is making a request. Authorisation determines what that identity is allowed to do. The MCP server must enforce a machine-readable, scope-specific authorisation model that governs which tools an identity may invoke, which Decision Trees it may access, and which normative domains fall within its operational scope. Authorisation policies must be externally configurable by the responsible authority, time-bounded, revocable, and fully logged. The objective is not merely access restriction. It is constitutional precision: an agent authorised for tax evaluation must not be able to invoke evaluation of a social welfare norm merely because the transport credentials are valid.
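The "constitutional precision" requirement can be expressed as a grant check over (identity, tool, domain) triples. The policy structure and identifiers below are assumptions for the sketch; the real policy model would be externally configurable, time-bounded, and revocable as the text requires.

```python
# Hypothetical machine-readable authorisation policy: each identity holds an
# explicit set of (tool, normative domain) grants.
POLICY = {
    "agent-tax-01": {("evaluate", "income_tax"), ("query", "income_tax")},
}

def authorise(identity: str, tool: str, domain: str) -> bool:
    """Valid transport credentials are not enough: the specific tool call on
    the specific normative domain must be covered by an explicit grant."""
    return (tool, domain) in POLICY.get(identity, set())
```

Under this model, the tax-certified agent's attempt to evaluate a social welfare norm fails authorisation even though its credentials are perfectly valid.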
Control 4: Delegation and Machine Acting Authority.
Many clients will operate not on their own behalf but under delegated authority from an agency, regulated entity, or certified automated service. The model must therefore support explicit delegation chains that are machine-readable, scope-limited, time-bounded, and revocable. Requests must carry sufficient metadata to establish not only which credential invoked the service, but under whose authority, for what purpose, and within which mandate. This is necessary for meaningful attribution, especially where autonomous or semi-autonomous software agents participate in administrative workflows.
Control 5: Agent Certification Verification.
This control is the most important quality gate in Models B and C, and it operates differently from the other controls: instead of verifying technical properties such as network boundaries, identity credentials, input formats, or output integrity, Control 5 verifies normative qualification.
When an agent initiates an interaction with the evaluation engine, Control 5 checks the agent’s current certification status against the Registry. The verification has three elements. First, it confirms that the agent holds a valid, unexpired certification for the specific normative domain of the requested evaluation. An agent certified for income tax subsumption under Model B is not thereby certified for asylum law evaluation, regardless of its technical capabilities. Second, it confirms that the certification class matches the model under which the norm element is classified in the Coverage Map. An agent holding a Model B certification cannot perform a Model C evaluation. The Coverage Map’s model assignment is the reference: if the element is classified as Model C, only a Model C-certified agent may proceed. Third, it confirms that the certified version of the agent matches the version that is currently operating. An agent that has been updated since its last certification fails Control 5 until recertification is obtained, because the normative capabilities of the updated version have not been assessed.3
If any of these checks fails, the evaluation engine rejects the interaction. The agent cannot proceed. No Decision Package is produced. The rejection is logged in the audit trail with the specific reason for the failure (certification expired, domain mismatch, model mismatch, or version mismatch), and the log entry is available to the supervising authority, auditors, and, where a citizen is affected, to the court reviewing the determination.
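The three checks and the specific rejection reasons recorded in the audit trail can be sketched as follows. The record structure and field names are assumptions; the rejection strings mirror the reasons listed in the text.

```python
from dataclasses import dataclass

# Hypothetical Registry-published certification record for the sketch.
@dataclass
class Certification:
    agent_id: str
    domain: str             # e.g. "income_tax"
    model_class: str        # "A", "B", or "C"
    certified_version: str
    expires: str            # ISO date, comparable as a string

def verify_certification(cert: Certification, *, today: str,
                         requested_domain: str, coverage_model: str,
                         running_version: str) -> str:
    """Control 5 sketch: return "ok" or the specific rejection reason
    that would be logged for the supervising authority and the courts."""
    if today > cert.expires:
        return "certification expired"
    if cert.domain != requested_domain:
        return "domain mismatch"
    if cert.model_class != coverage_model:
        return "model mismatch"
    if cert.certified_version != running_version:
        return "version mismatch"
    return "ok"
```

Any result other than "ok" means the evaluation engine rejects the interaction and no Decision Package is produced.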
Under Model A, Control 5 operates in a reduced form. The agent’s function is limited to fact-finding, and the normative evaluation is performed entirely by the deterministic engine. Control 5 verifies that the agent holds a valid fact-finding capability certification (the lowest tier described in Chapter 10), but it does not verify normative qualification, because no normative reasoning is performed by the agent. The reduced verification reflects the reduced constitutional risk: a fact-finding error is serious, but it is qualitatively different from a normative reasoning error.
Control 5 is therefore not a trust-based check. It is a cryptographic verification of a Registry-published credential, enforced at runtime, with immediate operational consequence. It is the point at which the certification system described in Chapter 10 meets the interface architecture described in this chapter. Without Control 5, the certification system would be a recommendation. With it, the certification system has constitutional teeth: an uncertified agent is not merely discouraged from operating. It is architecturally prevented from producing valid outcomes.4
Layer 3: Input Boundary
The input boundary layer filters everything that enters the normative core. Its purpose is to ensure that only qualified, schema-conformant, and appropriately classified data reaches the evaluation engine.
Control 6: Input Integrity and Trust Classification.
The evaluation engine must never consume unqualified input as if it were legally reliable fact. Every submitted fact must be schema-validated, type-checked, source-declared, and classified by trust status. Facts originating from authoritative registers must be distinguishable from facts originating from user submission, document extraction, probabilistic classification, or AI-assisted inference. Where facts are derived from uncertain upstream processes, the system must require confidence metadata and enforce threshold-based escalation to human review before evaluation can proceed. This is both a correctness control and a security control. It prevents uncertain or manipulated upstream outputs from entering the normative layer as if they were authoritative inputs.
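The threshold-based escalation rule can be sketched as a routing decision over trust-classified facts. The 0.95 threshold and the two route names are illustrative assumptions; in practice the threshold would be norm-specific.

```python
# Illustrative Control 6 routing: facts are (source, confidence) pairs;
# register-sourced facts carry no confidence because they are authoritative.
CONFIDENCE_THRESHOLD = 0.95   # assumed value for the sketch

def route(facts: list) -> str:
    """Escalate to human review before evaluation if any non-authoritative
    fact is missing confidence metadata or falls below the threshold."""
    for source, confidence in facts:
        if source != "authoritative_register" and (
                confidence is None or confidence < CONFIDENCE_THRESHOLD):
            return "human_review"
    return "evaluate"
```

The effect is the one the text demands: uncertain upstream outputs cannot silently enter the normative layer as if they were authoritative inputs.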
Control 7: Data Protection and Minimisation.
The interface processes potentially sensitive case facts and therefore requires strict controls on data minimisation, purpose limitation, retention, and access segregation. Only data strictly necessary for a requested evaluation or explanation may be processed. Sensitive attributes must be compartmentalised where possible. Audit trails must avoid unnecessary duplication of personal data and use pseudonymisation or tokenisation where this does not impair evidentiary requirements. Submitted facts must not be repurposed for model training, profiling, or unrelated analytics unless there is an explicit and lawful basis to do so. Integrity controls alone are not sufficient if the surrounding data handling model remains over-permissive.
Control 7 operationalises the data protection by design and by default requirement of Article 25 GDPR at the interface level. The provision requires that the controller implement appropriate technical and organisational measures designed to implement data protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing. The OLRF’s interface architecture is unusually well positioned to satisfy this requirement, because the Decision Tree’s DataPoint Schema specifies in advance, and with legal precision, exactly which facts are required for the evaluation. The schema functions as a machine-enforceable data minimisation policy: facts not specified in the schema are rejected at Control 6 before they reach the evaluation engine, and the purpose limitation is structural rather than contractual. This is a stronger implementation of Article 25 than most administrative systems achieve, where data minimisation is typically enforced through policy documents and manual compliance rather than through architectural constraints. The prohibition on repurposing submitted facts for model training is constitutionally significant under Models B and C, because the Legal Agent’s provider may have a commercial interest in using case data to improve its foundation model.5
Control 8: Abuse Resistance and Boundary Defence.
The interface must treat prompt injection, schema smuggling, parser confusion, tool misuse, context poisoning, and malformed structured input as first-class threats. All inputs must be validated against strict schemas. Unsupported fields must be rejected rather than ignored. The explanation layer must consume only canonical, signed outputs from the deterministic core and must not be permitted to trigger privileged side effects. Any component interacting with AI-generated content must operate under explicit trust-reduction assumptions. This is essential to ensure that probabilistic or adversarial content cannot cross the boundary into legally operative logic.
Control 8 addresses a threat class that is specific to AI-mediated legal infrastructure and that has no direct precedent in traditional administrative security models. Prompt injection, the technique by which adversarial content embedded in input data causes an AI system to deviate from its intended instruction, has been demonstrated to be effective against every major language model family, and no complete defence is currently known.6
Layer 4: Core
The core layer contains the evaluation engine and the normative artefacts it operates on. It is the most protected zone of the architecture.
Control 9: Service Segmentation and Runtime Isolation.
The evaluation engine, the explanation layer, the registry resolver, the signing service, and the audit subsystem must operate as separate trust zones with explicit interface contracts and minimal privileges. The explanation layer, especially where backed by probabilistic AI components, must be logically and operationally isolated from the evaluation path. It may consume signed evaluation outputs, but it must never be able to modify them, inject facts back into the evaluation context, or alter normative artefacts. Signing operations must be separated from general application runtimes and restricted to dedicated trust services. This segmentation limits blast radius and prevents compromise of a lower-assurance component from cascading into the legally authoritative core.
Layer 5: Output
The output layer ensures that every artefact leaving the core is cryptographically bound to its provenance, its content, and the normative context that produced it.
Control 10: Cryptographic Binding of Artefacts and Outputs.
Every authoritative artefact in the chain must be cryptographically bound to its provenance and content. Decision Trees must be signed at publication. Registry entries must be integrity-protected and version-identifiable. Decision Packages must be signed over the submitted facts, the exact Decision Tree version used, the evaluation timestamp, the service identity, and the produced effects. Under Models B and C, the Decision Package additionally includes the agent’s certification credential identifier, binding the determination to the specific qualified agent that produced it. This ensures tamper evidence, supports independent verification, and prevents fabrication of valid-looking outputs outside the authorised service boundary. Cryptographic binding is what allows later reviewers to establish not only what decision was produced, but exactly which normative artefact produced it, under which trust context, and by which certified agent.
The cryptographic binding requirement implements, at the technical level, the legal principle that every administrative act must be attributable to the authority that issued it and must be verifiable in its integrity at any later point, including years or decades after issuance. In German administrative law, this principle is expressed in §37 VwVfG (form requirements for administrative acts, including the requirement that the issuing authority be identifiable) and in §415 ZPO (evidentiary value of public documents, which depends on the document’s integrity and attributability). The OLRF extends these requirements to machine-produced artefacts by replacing the handwritten signature and the institutional letterhead with cryptographic equivalents whose verification is mathematically rather than institutionally grounded. The signing model follows eIDAS Article 25 ff. (Regulation (EU) No 910/2014, as amended by Regulation (EU) 2024/1183), which establishes the legal framework for electronic signatures, seals, and timestamps in the European legal order. The specific requirement that Decision Packages be signed over the submitted facts, the exact Decision Tree version, the evaluation timestamp, the service identity, and the produced effects (and, under Models B and C, the agent’s certification credential identifier) ensures that the signed object is not merely the output but the complete evaluation context. This matters for judicial review: a court examining a challenged determination can verify not only what the system concluded but which version of which normative artefact produced the conclusion, on the basis of which facts, at which moment, and (under Models B and C) by which certified agent. The inclusion of the agent certification credential in the signed package under Models B and C is constitutionally significant because it binds the determination to a specific qualified actor. 
A Decision Package signed without a certification credential under Model B or C would be formally valid as a cryptographic object but constitutionally deficient as a legal artefact, because the qualification of the normative actor would be unverifiable. The canonical serialisation of the signed object follows RFC 8785 (JSON Canonicalization Scheme), which ensures that the same logical content produces the same byte sequence regardless of the serialisation environment, a precondition for reproducible signature verification across different systems and jurisdictions.7
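The reproducibility property that canonical serialisation buys can be demonstrated in a few lines. This is a sketch under stated assumptions: for payloads limited to strings and integers, Python's `json.dumps` with sorted keys and compact separators coincides with RFC 8785 output, though full JCS additionally fixes number and string escaping rules; and the HMAC key stands in for an HSM-held key under Control 11, where production signing would use qualified asymmetric signatures under eIDAS.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # stand-in for an HSM-held key

def canonicalise(package: dict) -> bytes:
    # Approximates RFC 8785 for string/integer payloads: sorted keys,
    # no insignificant whitespace, no ASCII escaping of non-ASCII text.
    return json.dumps(package, sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False).encode()

def sign(package: dict) -> str:
    return hmac.new(SIGNING_KEY, canonicalise(package), hashlib.sha256).hexdigest()

def verify(package: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(package), signature)
```

Two packages with the same logical content but different field order produce identical signatures, and any change to the signed evaluation context (facts, tree version, timestamp) breaks verification.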
Control 11: Key Management.
No signature-based trust model is credible without explicit controls for key generation, storage, activation, rotation, suspension, revocation, and recovery. Keys used for signing normative artefacts and Decision Packages must be held in certified hardware security modules or equivalent qualified trust service infrastructure. Administrative access to signing keys must be tightly restricted, dual-controlled where appropriate, and fully logged. Compromise response procedures must include immediate revocation propagation, service suspension where required, and verifiable re-establishment of trust anchors. In the OLRF context, key compromise is not an operational inconvenience. It is a potential integrity failure of the legal infrastructure itself.
Vertical Controls
The following controls operate across the entire request lifecycle and across all layers simultaneously.
Control 12: Audit Logging and Non-Repudiation.
Every security-relevant event must be recorded in an append-only, tamper-evident audit trail. This includes authentication attempts, authorisation decisions, certification verification checks (including rejections with specific reasons), tool invocations, fact validation requests, evaluation executions, explanation requests, registry resolutions, key management events, rate limit actions, policy denials, service errors, and administrative interventions. Logs must be timestamped using a trusted time source, integrity-protected, and retained according to legal and supervisory requirements. Audit records must support both operational forensics and evidentiary use. In a legally consequential environment, logging is not for troubleshooting. It is part of the accountability model.
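The append-only, tamper-evident property can be sketched with a hash chain, in which each entry commits to its predecessor. This is a minimal illustration; a production audit trail would add trusted timestamps, external anchoring, and retention controls as the text requires.

```python
import hashlib
import json

class AuditLog:
    """Hash-chained log sketch: modifying any recorded event breaks the
    chain from that point on, making tampering detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

This is what makes the trail evidentiary rather than merely diagnostic: a reviewer can verify that the recorded sequence of authentication attempts, Control 5 rejections, and evaluations is exactly the sequence that occurred.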
Control 13: Availability, Rate Control, and Resilience.
Because the MCP server mediates access to executable law, its availability has legal significance. The design must therefore include denial-of-service protections, credential-scoped and norm-scoped rate limiting, workload isolation between high-volume discovery traffic and legally consequential evaluation traffic, back-pressure mechanisms, and priority handling for critical administrative workflows. Resilience planning must also define degraded operating modes, fail-closed versus fail-safe behaviour for specific functions, recovery time objectives, and continuity procedures for partial outages. A system that preserves integrity but is unavailable under stress still fails if it denies timely access to law or lawful decision processes.
The proposition that the availability of a legal infrastructure has legal significance, not merely operational significance, follows from the constitutional guarantee of timely access to law and to lawful administrative processes. In German administrative law, the principle is expressed in the Beschleunigungsgrundsatz (principle of expedition).8
In the OLRF context, these requirements apply not to the human processing time alone but to the infrastructure on which the processing depends. If the MCP server is unavailable, no evaluation can be performed, no Decision Package can be produced, and no determination can be issued. An infrastructure outage is therefore not merely an IT incident. It is a systemic suspension of the state’s capacity to apply law through its automated systems, affecting every citizen whose case is pending evaluation during the outage. The distinction between fail-closed and fail-safe behaviour is constitutionally consequential. A fail-closed system (which refuses to produce any output when a component is degraded) preserves integrity at the cost of availability: no incorrect determinations are produced, but no determinations are produced at all. A fail-safe system (which continues to operate in degraded mode, perhaps by falling back to a cached Decision Tree version or by routing cases to human processing) preserves availability at the cost of full architectural guarantees. Neither choice is constitutionally neutral. The appropriate behaviour depends on the constitutional weight of the affected norm: for benefit determinations where delay causes immediate hardship, fail-safe with human fallback may be constitutionally required; for enforcement determinations where an incorrect automated outcome could cause irreversible harm, fail-closed may be the only defensible choice. The resilience design must therefore be calibrated to the Coverage Map’s classification: the model assignment, the certification requirement, and the constitutional weight of the norm element determine not only how the system operates under normal conditions, but how it degrades under stress9.
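The calibration argument above can be condensed into a small dispatch table: the degraded-mode behaviour is selected by the norm element's classification rather than chosen globally. The categories and behaviour names are illustrative assumptions, not OLRF terminology.

```python
# Illustrative resilience policy keyed to the Coverage Map classification.
DEGRADED_MODE = {
    "benefit": "fail_safe_human_fallback",   # delay causes immediate hardship
    "enforcement": "fail_closed",            # a wrong automated outcome may be irreversible
}

def degraded_behaviour(norm_category: str) -> str:
    """Default to fail-closed: when a norm element is unclassified,
    integrity is preferred over availability."""
    return DEGRADED_MODE.get(norm_category, "fail_closed")
```

The design choice encoded here is the one the text argues for: neither fail-closed nor fail-safe is constitutionally neutral, so the selection must be explicit and norm-specific rather than a single system-wide setting.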
Control 14: Software Supply Chain Security.
Trust in the runtime depends on trust in the build and deployment pipeline. Source repositories, dependency manifests, build environments, deployment artefacts, and configuration baselines must all be governed under a software supply chain assurance model. Artefacts should be reproducibly buildable where feasible, signed before deployment, and verifiable at runtime through attestation or equivalent integrity controls. Dependencies must be pinned, scanned, and reviewed under a formal vulnerability management process. Update paths must themselves be authenticated and auditable. This reduces the risk that a sound architecture is undermined by compromised dependencies or build-time tampering.
Control 15: Incident Response and Forensic Readiness.
The security model has to define procedures for credential compromise, anomalous request patterns, suspected key exposure, integrity alerts, service degradation, and dependency-related incidents. Detection, containment, eradication, recovery, and post-incident review must be operationalised in advance. Forensic readiness requires trusted timestamps, immutable logs, preserved correlation identifiers, and segregation of duties during incident handling. In a legally sensitive environment, it must be possible to determine which requests were authentic, which outputs remain trustworthy, and whether any normative artefact or decision record was affected.
The Two Boundaries
Together, these fifteen controls enforce two distinct kinds of boundary.
The first is the capability boundary: the agent can only do what the interface permits. Controls 1 through 4 and 6 through 15 ensure that every interaction is authenticated, authorised, schema-conformant, integrity-protected, logged, and cryptographically bound. They prevent technical overreach: no agent can modify the normative layer, bypass the evaluation engine, fabricate a signed Decision Package, or inject unqualified data into the evaluation path.
The second is the qualification boundary: the agent can only do what it is certified to do. Control 5 ensures that every agent performing normative functions holds a valid, domain-specific, model-specific, version-bound certification, verified at runtime against the Registry. It prevents normative overreach: no agent can perform subsumption for which it is not certified, operate under a model for which it is not qualified, or continue to operate after its certification has expired or been revoked.
The capability boundary has been a feature of the OLRF draft since its earliest design. The qualification boundary is new. It is the architectural consequence of the recognition, developed fully in Chapter 10: Agent Certification — The State Examination for Machine Actors, that the constitutional order requires not merely that machine actors be constrained in what they can do, but that they be qualified for what they are permitted to do. The two boundaries operate in concert. An agent that is technically capable but normatively unqualified is stopped at Control 5. An agent that is normatively qualified but technically misbehaving is stopped at the other fourteen controls. Together, they ensure that the interface admits only agents that are both capable and qualified, and that every interaction by such an agent is fully traceable, fully attributable, and fully reviewable.10
The Architectural Guarantee
The security model described above is not designed to make AI systems behave well. It is designed to make constitutional violations structurally impossible through the interface path, regardless of the quality, alignment, or intentions of the AI system on the other side.11
The read-only design of the normative layer means that no AI system, regardless of its capabilities, can modify a Decision Tree, alter a Coverage Map, or produce a signed Decision Package without a genuine evaluation by the normative infrastructure. The typed, validated input boundary means that no AI system can inject unqualified data into the deterministic evaluation engine. The cryptographic binding means that no AI system can fabricate a valid-looking Decision Package outside the authorised service boundary. And the agent capability attestation means that no AI system can access the interface without demonstrating the competence required for the task.12
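The cryptographic binding can be sketched as follows. A real deployment would use asymmetric signatures (for example eIDAS-qualified seals) rather than a shared secret, but an HMAC over a canonicalised payload shows the structural point: without the key held exclusively inside the authorised service boundary, no client can produce a package that verifies. All names here are illustrative.

```python
import hashlib
import hmac
import json

# Illustrative only: the signing key lives exclusively inside the authorised
# service boundary, so a valid-looking Decision Package cannot be fabricated
# outside it. Production systems would use asymmetric signatures (e.g.
# eIDAS-qualified seals), not an HMAC shared secret.

SERVICE_KEY = b"held-only-inside-the-service-boundary"  # hypothetical

def canonical(payload: dict) -> bytes:
    # Sorted keys and fixed separators approximate RFC 8785 canonicalisation,
    # so semantically identical payloads always serialise to identical bytes.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def seal_decision_package(payload: dict) -> dict:
    tag = hmac.new(SERVICE_KEY, canonical(payload), hashlib.sha256).hexdigest()
    return {"payload": payload, "seal": tag}

def verify_decision_package(package: dict) -> bool:
    expected = hmac.new(SERVICE_KEY, canonical(package["payload"]),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["seal"])
```

Any change to the payload after sealing, however small, invalidates the seal; the binding turns "this package came from a genuine evaluation" into a property that can be checked rather than asserted.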
This is the deeper significance of the interface within the OLRF. It is not merely an interoperability layer or a convenient protocol for tool access. It is proposed as the technical form of a constitutional guarantee: that AI operates within the legal order as a supporting capability, not as a competing source of normative authority. The law remains the system that determines legal effect. The interface is designed so that AI can assist that process without ever displacing it.13
Footnotes

1. For the foundational argument that software architecture is a regulatory modality with normative force comparable to law: Lessig, L., Code: Version 2.0, Basic Books 2006, Chapters 7-8. The OLRF interface operationalises Lessig’s insight in a specific domain: the architecture of the interface does not merely regulate AI behaviour. It constitutes the boundary between probabilistic reasoning and normative authority. ↩

2. Anthropic, Model Context Protocol Specification v1.0, November 2024, https://modelcontextprotocol.io. For the broader context of AI tool-use standardisation: see also Google DeepMind, Agent-to-Agent Protocol (A2A), 2025 (defining how autonomous agents coordinate multi-step workflows). The OLRF interface is designed to be compatible with both protocols while imposing the additional constitutional constraints that a legally authoritative infrastructure requires. ↩

3. The three-element verification at Control 5 (domain match, model match, version match) is designed to prevent three distinct failure modes. Domain mismatch prevents an agent certified in one area of law from operating in another (the equivalent of a tax specialist performing surgery). Model mismatch prevents an agent certified for a lower level of normative autonomy from exercising a higher level (the equivalent of a paralegal performing judicial functions). Version mismatch prevents an agent whose capabilities have changed since certification from operating on the basis of a stale credential (the equivalent of a physician practising under a licence that predates a material change in their competence). Each failure mode represents a different constitutional risk, and each requires a distinct verification step. See: Chapter 10 (Agent Certification), Design Principles 1 (domain-specific), 2 (model-specific), and 3 (version-bound). ↩

4. The principle that constitutional requirements must be enforced architecturally rather than by reliance on voluntary compliance is the OLRF’s application of Lessig’s modality thesis (Code: Version 2.0, Basic Books 2006) and Hildebrandt’s Legal Protection by Design (Legisprudence, Vol. 5, No. 2, 2011, pp. 223 ff.). In the specific context of agent certification, the architectural enforcement at Control 5 ensures that the certification system cannot be circumvented by a provider who deploys an uncertified agent, by an authority that fails to verify certification, or by an agent that misrepresents its credentials. The enforcement is not a policy that can be waived. It is a technical precondition that cannot be bypassed. ↩

5. Bygrave, L. A., “Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements”, Oslo Law Review, Vol. 4, No. 2, 2017, pp. 105 ff.; Article 29 Working Party (now EDPB), “Guidelines on Data Protection by Design and by Default”, Version 2.0, adopted 20 October 2020, paras. 40 ff. (requiring that the technical architecture itself enforce minimisation rather than relying on organisational measures alone). ↩

6. Greshake, K. et al., “Not What You’ve Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection”, Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security (AISec), 2023, pp. 79 ff.; Perez, F. and Ribeiro, I., “Ignore This Title and HackAPrompt: Exposing Systemic Weaknesses of LLMs through a Global Scale Prompt Hacking Competition”, Proceedings of EMNLP, 2023. ↩

7. eIDAS Regulation, Arts. 25 ff. (legal effects of electronic signatures and seals); RFC 8785, Rundgren, B. et al., “JSON Canonicalization Scheme (JCS)”, IETF 2020; for the evidentiary framework: Roßnagel, A. and Schmücker, P. (eds.), Beweiskräftige elektronische Archivierung, Economica 2006, pp. 45 ff. (requirements for long-term evidential integrity of electronically signed documents in the German legal order); for the specific challenge of long-term verification: ETSI TS 119 511 (policy and security requirements for trust service providers providing long-term preservation of digital signatures), which addresses the problem that cryptographic algorithms may become insecure over time, requiring periodic re-signing or timestamping to maintain evidential value across the decades over which administrative determinations may be challenged. ↩

8. §10 VwVfG requires that administrative proceedings be conducted simply, expeditiously, and efficiently. The Bundesverfassungsgericht has held that unreasonable delay in administrative proceedings can itself constitute a violation of fundamental rights, independent of the substantive correctness of the eventual determination (BVerfGE 55, 349 (369, Hess. Richtergesetz, 1980)). Article 41(1) of the EU Charter of Fundamental Rights codifies the right to have one’s affairs handled within a reasonable time as an element of the right to good administration. ↩

9. BSI (Bundesamt für Sicherheit in der Informationstechnik), “IT-Grundschutz-Kompendium”, 2024, Module OPS.1.1.4 (availability management for critical infrastructure); NIST SP 800-34 Rev. 1, “Contingency Planning Guide for Federal Information Systems”, 2010; for the constitutional dimension of availability in digital public services: Schliesky, U., “Digitalisierung der Verwaltung: Vom E-Government zur digitalen Verwaltung”, in Hill, H. et al. (eds.), Digitalisierung in Recht, Politik und Verwaltung, Nomos 2018, pp. 45 ff., arguing that the constitutional right to effective administration (Art. 41 EU-Grundrechtecharta) extends to the availability of the digital infrastructure through which administration is conducted. ↩

10. The distinction between capability boundary and qualification boundary has an analogue in aviation safety regulation. The airworthiness certificate (Regulation (EU) No 748/2012) establishes that an aircraft is technically capable of safe flight. The pilot licence (Regulation (EU) No 1178/2011) establishes that the person operating the aircraft is qualified to do so. Neither is sufficient without the other. An airworthy aircraft operated by an unqualified pilot, and a qualified pilot operating an unairworthy aircraft, are both prohibited. The OLRF applies the same logic to machine actors in legal infrastructure: the capability boundary (Controls 1-4, 6-15) is the airworthiness certificate. The qualification boundary (Control 5) is the pilot licence. See also: NIST Cybersecurity Framework 2.0, 2024 (for the general security architecture); ENISA, “Securing Machine Learning Algorithms”, 2021 (for AI-specific security considerations in public infrastructure). ↩

11. Reidenberg, J.R., “Lex Informatica: The Formulation of Information Policy Rules through Technology,” Texas Law Review, Vol. 76, No. 3, 1998, pp. 553-593. Reidenberg demonstrated that technological architectures impose rules on participants that function as a form of regulation independent of, and often more effective than, legal rules: “Technological capabilities and system design choices impose rules on participants” that are “not optional” and “do not require transgressors to be identified and sanctioned after the fact.” The OLRF interface operationalises this insight in a specific constitutional domain: the read-only design, the typed input boundary, and the cryptographic binding do not prohibit constitutional violations. They eliminate the technical means through which such violations could occur. ↩

12. Hildebrandt, M., “Legal Protection by Design: Objections and Refutations,” Legisprudence, Vol. 5, No. 2, 2011, pp. 223-248, which argues that legal protection must be embedded in the design of information systems rather than superimposed through post-hoc compliance mechanisms, and that this embedding is itself a constitutional requirement under the rule of law: “If we accept that technology increasingly regulates our lives, we must also accept that the design of technology is a matter of constitutional concern.” ↩

13. GDPR Art. 25 (data protection by design and by default), which codifies the principle that protective constraints must be built into system architecture rather than left to operational policy, and EU AI Act Art. 9(1) in conjunction with Art. 15 (requiring that high-risk AI systems integrate risk management and cybersecurity measures into their technical design). ↩