A bank uses an AI model to assist mortgage underwriting decisions.
The model contributes to thousands of approvals each month. A customer brings a complaint that an underwriting outcome breached the Consumer Duty. The Financial Ombudsman Service refers the case to the FCA. The FCA opens an investigation under section 66A of the Financial Services and Markets Act 2000. The investigator does not write to the firm. The investigator writes to a named Senior Manager.
The letter contains one operational question: what reasonable steps did you take to prevent the breach?
That question is the entire enforcement architecture. Section 66A was drafted before any commercial AI system contributed to a regulated decision. It still applies. It is the doctrine on which every SM&CR enforcement action against an individual rests, and the FCA has applied it consistently. The defence is satisfied only by what the named individual personally did: with what evidence, on what date, against what specific information picture.
In the architecture of most current AI governance frameworks, the answer to that question does not exist.
The four elements of the reasonable steps defence
The FCA Handbook at DEPP 6.5 sets out how the regulator assesses reasonable steps. The doctrine has four operative elements, derived from the Authority's enforcement practice and from the case law that developed around the precursor Approved Persons Regime.
The first element is foreseeability. The Senior Manager must have identified, or reasonably ought to have identified, that the activity in question carried regulatory risk. For an AI-assisted decision in a regulated process, foreseeability is not in dispute. The Authority has been publishing AI-related supervisory expectations since 2022. The EU AI Act's high-risk classification of AI in credit decisions has been settled since 2024. The named Senior Manager who approved the deployment cannot now argue that the regulatory exposure was unforeseeable.
The second element is the design of preventive controls. The Senior Manager must have put in place arrangements proportionate to the risk. An AI policy is not a control. A model risk framework is not a control. A control is a specific operational mechanism that produces a recorded action at the moment a regulated decision is made.
The third element is monitoring. The controls must be tested. Their continued effectiveness must be evidenced. A control that was designed in 2024, deployed in 2025, and not retested against 2026 supervisory expectations is not a control the Authority will credit.
The fourth element, and the one that closes the defence, is contemporaneous personal engagement. The Senior Manager must show that, at the relevant time, they personally reviewed the operation of the controls or the output they were governing. The Authority's enforcement decisions in the post-2016 period are explicit on this point. Reliance on the firm's framework is not sufficient. Reliance on the second line of defence is not sufficient. Personal engagement, recorded contemporaneously, is the element on which the defence either holds or fails.
What this requires of an AI-assisted decision
Translate the four elements into the AI decision environment, and the Digital Alibi standard appears in the regulatory text without ever being named. To satisfy reasonable steps for a decision in which an AI system materially contributed, the named Senior Manager must produce evidence of four things: the version of the model in use at the precise decision moment; the inputs and information picture the model was operating against at that moment; the human review, if any, that was applied to the output; and the dated record that the Senior Manager themselves engaged with the governance of those operations contemporaneously, not retrospectively.
Most current AI governance frameworks produce documentary evidence of the first item. Some produce evidence of the second. A small number produce evidence of the third. Almost none produce contemporaneous evidence of the fourth.
A board minute that records that the Senior Manager attended the AI Risk Committee in March 2026 does not evidence that they engaged with the specific decision under examination. A model risk function attestation that the model is operating within tolerance does not evidence that the named Senior Manager reviewed that attestation before relying on it. An ethics committee that meets quarterly produces minutes that do not date to the decision moment. None of these constitute the contemporaneous personal engagement the fourth element of the defence requires.
DEPP 6.5 is not a documentation standard. It is an evidence standard. The distinction is decisive.
The convergence with the EU AI Act
On 2 August 2026, EU AI Act Article 14 becomes enforceable for operators of high-risk AI systems. The article requires that high-risk systems can be effectively overseen by natural persons during the period in which they are in use. It requires named oversight, recorded oversight, and oversight that is contemporaneous with the operation of the system.
Article 14 is a regulatory obligation on the operator. It is not a personal liability provision. But it produces, as a side effect, exactly the evidence base that section 66A of FSMA 2000 requires. An operator that complies with Article 14 produces the dated record of named human oversight that the named Senior Manager will need if a Consumer Duty enforcement subsequently arrives at their desk.
An operator that does not comply with Article 14 leaves the named Senior Manager personally exposed under the FCA's separate enforcement regime, regardless of any EU AI Act penalty the firm itself may face. The two regimes operate concurrently. They reinforce each other. Their evidence requirements converge on the same artefact: the contemporaneous decision record.
What the named Senior Manager must do this quarter
Four actions, in this sequence, before 2 August 2026.
Identify the AI systems for which you are the named Senior Manager. Do not delegate this exercise to a function. The Authority will write to a named individual; the inventory must be one the named individual personally controls. Shadow AI in your area of responsibility is your exposure.
Establish, in writing and dated, what evidence currently exists of your contemporaneous engagement with each system's governance. If the answer is that the evidence is the firm's framework, the framework is now your liability.
Mandate the production of a Decision Map for each material AI system in your area: the contemporaneous record that captures the four elements of the defence at the moment each material decision is made, not retrospectively.
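One way to make such a record contemporaneous rather than retrospective is an append-only log in which each entry carries a creation timestamp and the hash of its predecessor, so that a backdated insertion or a later edit breaks the chain. The sketch below is a generic illustration of that technique, not the Decision Map architecture itself; the function names and entry format are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list[dict], payload: dict) -> dict:
    """Append a timestamped entry chained to the previous entry's hash."""
    body = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": log[-1]["entry_hash"] if log else GENESIS,
        "payload": payload,
    }
    # Hash the entry body; any later change to the body invalidates this hash.
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; a retrospective edit anywhere fails verification."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log: list[dict] = []
append_entry(log, {"decision_id": "MTG-2026-000123", "model_version": "v4.2.1"})
append_entry(log, {"decision_id": "MTG-2026-000124", "model_version": "v4.2.1"})
```

Tamper-evidence of this kind is what separates a record produced at the decision moment from a record reconstructed after the enforcement letter arrives.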
Record, in the next board or committee minute that addresses AI governance, that you have personally reviewed the Decision Map architecture for each system in your area and that you are satisfied it is operating to the standard the FCA's reasonable steps defence requires. That minute is the first line of your defence.
The named Senior Manager who completes these four actions before 2 August 2026 has the evidence base the defence requires. The named Senior Manager who does not has the framework that will be on trial in their place.
The FCA's enforcement letter, when it arrives, will not contain a question about the firm's AI governance framework. It will contain a question about the named individual's reasonable steps. The two are not the same document. They were not designed to be the same document. The 97 days remaining before the EU AI Act compliance date are the window in which the named Senior Manager produces, or fails to produce, the evidence base for the only question that matters.
* * *
Dr. Ivan Roche FRSS FRSA MInstP is the Founder and Principal Advisor of Otopoetic Limited, an AI governance advisory practice. The Otopoetic Decision Map architecture is designed to produce the contemporaneous evidence that the FCA's reasonable steps doctrine and the EU AI Act's Article 14 jointly require. The Digital Alibi Assessment establishes the current governance address of regulated firms in 21 days; the Accelerated Protocol takes them to the Article 14 and reasonable steps standard in 11 weeks.

