In the last three issues of this newsletter I have set out why boards must govern AI systems as agents rather than tools, why the defensibility standard requires contemporaneous evidence rather than retrospective documentation, and what happens when an agentic system operates beyond the boundaries its governance programme assumed were fixed. Anthropic's Mythos disclosure was the case study. The governance gap it exposed was not a technical anomaly. It was a structural one.
This issue turns to the regulatory clock. On 2 August 2026, the EU AI Act's compliance obligations for operators of high-risk AI systems become enforceable. The penalty structure is concrete: non-compliance with the high-risk obligations carries fines of up to EUR 15 million or 3 per cent of global annual turnover, whichever is higher, and the top tier, reserved for prohibited practices, reaches EUR 35 million or 7 per cent. For a FTSE 350 company with annual turnover of GBP 2 billion, even the 3 per cent tier is GBP 60 million. The deadline does not move. The question is whether the board acts before or after it arrives.
What Article 12 Actually Requires, and What It Does Not
Article 12 of the EU AI Act is the provision most frequently cited in governance documentation as the AI equivalent of the Digital Alibi requirement. It requires operators of high-risk AI systems to ensure that their systems are capable of automatically logging events relevant to the identification of risks and to situations in which the system may not function as intended.
This sounds like the output-moment capture that defensible governance requires. It is not, and the distinction is precise.
Article 12 requires that the system be capable of logging events. The regulation is concerned with identifying when the system is malfunctioning, not with preserving the evidence base of each decision the system contributes to. A system that logs error states, anomalous outputs, and performance degradation satisfies Article 12. It does not produce a defensible decision record for the decisions made between those anomalous events, which is precisely when the output-moment question will be asked.
Article 12 creates the logging infrastructure. It does not produce the Digital Alibi. Compliance with Article 12 is necessary. It is not sufficient.
The distinction matters because most boards currently believe their AI governance obligations under the EU AI Act are met by their logging and monitoring architecture. They are not. Logging that a system performed within normal parameters on a given day does not record what information picture existed at the specific decision moment, who the named individual accountable for that decision was, or whether a human exercised genuine oversight of the output before it was acted upon. That is the record the FCA, a claimant's solicitor, or a shareholder will request.
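The gap between system-level logging and a decision-level record can be made concrete with a minimal sketch. The field names below are illustrative assumptions, not terms drawn from the Act: they simply show that an output-moment record must bind the information picture, the model version, a named accountable individual, and a contemporaneous timestamp into one entry per decision.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class DecisionRecord:
    """One record per output moment, written when the decision is made.

    Field names are illustrative, not taken from the EU AI Act.
    """
    decision_id: str
    system_name: str
    model_version: str
    input_snapshot_hash: str   # fingerprint of the exact information picture
    output_summary: str
    accountable_person: str    # a named individual, not a team or role alias
    reviewed_by: str           # who exercised human oversight of this output
    recorded_at: str           # contemporaneous timestamp, never backfilled


def snapshot_hash(inputs: dict) -> str:
    """Fingerprint the information picture so an examiner can verify later
    that the recorded inputs are the inputs the system actually saw."""
    canonical = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def record_decision(system: str, version: str, inputs: dict, output: str,
                    owner: str, reviewer: str) -> DecisionRecord:
    """Capture the record at the output moment, not in retrospect."""
    now = datetime.now(timezone.utc).isoformat()
    return DecisionRecord(
        decision_id=f"{system}-{now}",
        system_name=system,
        model_version=version,
        input_snapshot_hash=snapshot_hash(inputs),
        output_summary=output,
        accountable_person=owner,
        reviewed_by=reviewer,
        recorded_at=now,
    )
```

A system that only logs error states and performance drift produces none of these fields for the decisions made between anomalies; that is the difference the text above describes.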
The Five Articles That Actually Govern Decision-Level Accountability
The EU AI Act's decision-level governance obligations do not rest on Article 12 alone. Four additional articles create specific requirements that most boards have not yet addressed at the decision level rather than the system level.
Article 9 requires a risk management system covering the entire lifecycle of a high-risk AI system. At the decision level, this means the risk that the information picture at any specific output moment is incomplete or reconstructible must be identified, documented, and mitigated. Most risk management frameworks address this at the model level. They do not address it at the output moment.
Article 13 requires that high-risk AI systems are designed and developed to be sufficiently transparent. Transparency at the decision level means that the basis for a specific output, including the version of the model, the inputs presented to it, and the parameters under which it operated, must be reconstructible by an independent examiner. A model card is not decision-level transparency.
Article 14 requires that high-risk AI systems can be effectively overseen by natural persons during the period in which they are in use. Human oversight at the decision level requires that there is a contemporaneous record that a named individual reviewed a specific output before it was acted upon. Governance frameworks that state that human oversight is in place, without a dated record of each specific oversight act, do not satisfy Article 14 at the decision level.
Article 19 requires providers to retain the logs their high-risk AI systems generate under Article 12; registration in the EU database and the conformity assessment sit in Articles 49 and 43 respectively. It is the conformity assessment documentation that requires the organisation to demonstrate, with evidence, that the governance obligations across Articles 9, 12, 13, and 14 are met. The evidence is the board-level document that most organisations do not yet have.
Where Most Organisations Currently Sit
Based on governance classification work across regulated organisations in financial services, insurance, and aviation, the majority of FTSE 350 and PE-backed organisations deploying AI in material decisions currently sit at a governance address of A2-E2-C2-R2-M2 or below. The 2 August 2026 deadline requires them to reach A4-E4-C4-R4-M4 across all five facets simultaneously. No other regulatory instrument creates that requirement on all five facets at once. The EU AI Act is the most demanding governance standard yet enacted.
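The A-E-C-R-M notation above can be treated as a five-facet address, and the compliance gap as the per-facet distance to the target. The parsing below is an illustrative sketch using the newsletter's own notation; nothing in it comes from the Act itself.

```python
# Target governance address the 2 August 2026 deadline requires,
# in the newsletter's A-E-C-R-M notation (levels 1-4 per facet).
TARGET = "A4-E4-C4-R4-M4"


def parse(address: str) -> dict:
    """Split an address like 'A2-E2-C2-R2-M2' into {facet: level}."""
    return {part[0]: int(part[1:]) for part in address.split("-")}


def gap(current: str, target: str = TARGET) -> dict:
    """Levels each facet must still advance; empty dict means compliant."""
    cur, tgt = parse(current), parse(target)
    return {f: tgt[f] - cur[f] for f in tgt if tgt[f] > cur.get(f, 0)}
```

For the typical starting point described above, `gap("A2-E2-C2-R2-M2")` shows every one of the five facets needing to advance two levels simultaneously, which is the point of the paragraph: no single-facet remediation closes the gap.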
The distance between where most regulated organisations currently sit and where the deadline requires them to be is the governance investment, and 104 days remain to close it.
A4-E4-C4-R4-M4 requires named board-level accountability for each material AI system, formal risk classification of all AI systems against the EU AI Act high-risk categories, documented human oversight records at the decision level, full regulatory compliance with Articles 9, 12, 13, 14, and 19, and a governance maturity that has been assessed, documented, and independently verified. Each of those requirements takes time that the calendar has nearly exhausted.
The Concurrent Risk Dimension
The EU AI Act is not the only governance obligation the Digital Alibi gap exposes. The FCA's Senior Managers and Certification Regime, specifically section 66A of the Financial Services and Markets Act 2000, exposes named Senior Managers to personal regulatory liability where a firm breaches a requirement in their area of responsibility and they failed to take reasonable steps to prevent it. An AI-assisted lending decision that cannot be reconstructed is not an IT governance failure. It is a personal accountability event for the named Senior Manager who approved the AI system's deployment.
DORA's ICT risk management requirements and GDPR's Article 22 obligations on automated decision-making create additional concurrent obligations for organisations in financial services and for any organisation processing personal data in AI-assisted decisions. The governance programme that closes the EU AI Act gap will, if properly designed, address all of them simultaneously. A governance programme designed only for EU AI Act compliance will not.
What the Board Must Do Before 2 August
The sequence is specific. It cannot be reordered. A board that begins at the wrong phase will not reach compliance before the deadline.
The first requirement is a complete AI systems inventory: every AI system contributing to any decision that could be classified as high-risk under the EU AI Act must be identified, named, and assigned a version identifier. This is not an IT asset register. It is a governance document that establishes which decisions are in scope for the compliance obligations that follow. Shadow AI, systems deployed without formal governance approval, must be included. They are already creating compliance exposure.
The second requirement is a risk classification exercise mapping each identified system to the EU AI Act high-risk categories under Article 6 and Annex III. The categories include AI used in credit decisions, employment, access to essential services, and safety-critical systems. For most regulated financial services organisations, the majority of AI systems contributing to material decisions will be in scope.
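The first two requirements together amount to a small, auditable data structure rather than an IT asset register. A minimal sketch, with hypothetical system names and category labels that paraphrase the Annex III areas named above:

```python
# Minimal sketch of an AI systems inventory with risk classification.
# Category labels paraphrase the high-risk areas named in the text;
# the systems listed are hypothetical examples.
HIGH_RISK_AREAS = {
    "credit",              # creditworthiness and credit scoring decisions
    "employment",          # recruitment and worker management
    "essential_services",  # access to essential private and public services
    "safety_critical",     # safety components of regulated products
}

inventory = [
    {"name": "loan-scorer", "version": "2.3.1", "decision_area": "credit",
     "owner": "named board-level individual", "governance_approved": True},
    {"name": "cv-screening", "version": "0.9.0", "decision_area": "employment",
     "owner": None, "governance_approved": False},  # shadow AI: still in scope
]

# Scope is determined by the decision the system contributes to,
# not by whether it went through formal governance approval.
in_scope = [s for s in inventory if s["decision_area"] in HIGH_RISK_AREAS]
unassigned = [s["name"] for s in in_scope if s["owner"] is None]
```

The `unassigned` list is the point of the exercise: every system it contains is a compliance exposure with no named accountable individual, whether or not the board knows it is deployed.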
The third requirement is the governance address assessment: establishing, with forensic precision, the current A-E-C-R-M governance address for each system in scope. This is the baseline from which the compliance investment must be planned. A board that does not know its current governance address cannot plan the investment required to reach A4-E4-C4-R4-M4 before the deadline.
The fourth requirement is the accountability chain documentation: naming the individuals accountable for each material AI system at board level and recording those accountabilities in board minutes with the specificity the EU AI Act and SM&CR require. This is the governance act that advances the accountability facet from A3 to A4. Without it, no other governance investment closes the compliance gap.
The compliance clock is not a future problem. At 104 days from today, it is a present one. Every board that has not begun the inventory exercise is already late.
The question that will be asked on 3 August 2026 is the same question that has been asked in boardrooms after every governance failure this newsletter has examined. Not whether the governance framework was in place. Whether the board can produce the contemporaneous record that proves it operated at the decision level, for each specific decision, at the exact moment it was made.
The answer to that question is being built, or not built, right now.
Dr. Ivan Roche FRSS FRSA MInstP is the Founder and Principal Advisor of Otopoetic Limited, an AI governance advisory practice. Otopoetic's Digital Alibi Assessment and Accelerated Protocol are designed to take regulated organisations from their current governance address to EU AI Act compliance before the 2 August 2026 deadline.

