A bank CTO walks out of an audit committee meeting satisfied. The chair had asked the predictable question: "Can you explain what your lending AI decided on the Morrison application?"
He said yes. He walked through the explainability report. The committee was satisfied. Problem solved.
Except it wasn't.
Because the audit committee asked the wrong question. And the CTO answered it correctly, which meant nobody noticed.
The question should have been: "Show me the complete information picture at the moment your system formed its view of that application. Show me the data versions, the sources it consulted, the outputs it ranked, the retrieval gaps it encountered, who in your human oversight layer saw what, when they saw it, what they decided, and prove to me that you can reproduce that entire state on demand."
Most CTOs cannot answer that. Not because they are incompetent. But because their organisations never asked them to build the infrastructure that would allow them to.
This is the distinction nobody names clearly enough: Transparency is not observability.
Transparency says: "Here is what the system did."
Observability says: "Here is why it did it, and here is the complete evidence that I can demonstrate that reasoning on demand, for any decision, at any time, to any regulator or board member."
It is the difference between compliance and governance. And in the next twelve weeks, it will become the difference between organisations that are ready for EU AI Act enforcement and organisations about to face enforcement action they did not see coming.
Why this matters.
The EU AI Act enforcement deadline is 2 August 2026. That is eighty days away.
The Act's Articles 13 and 29 require organisations deploying high-risk AI systems to maintain records of the decision-making process, the sources consulted, the reasoning behind the model's outputs, and the human oversight that occurred. Not in general terms. Specifically. Forensically. In a way that allows a regulator to audit the complete state of the system at the moment a decision was made.
Most organisations cannot do this.
Not because they lack logging systems. Logging is easy. But because logging is passive. Logging records what happened. It does not record why it happened in a way that allows reconstruction. And reconstruction, the ability to reproduce the exact state of the system at a historical decision point, is what regulators care about.
This is what I call the retrievability test. And it has four elements.
Element 1: Information Provenance
Where did the data come from? What was the source of truth at the moment of decision?
Most organisations know their current data sources. Few can specify, with forensic precision, which database tables, which schema versions, and which timestamps were active when a decision was formed three months ago.
Yet that is what regulators will ask for.
When the audit committee chair asks, "What data did your system consult on the Morrison application?", the correct answer is not "Our customer database." The correct answer is: "Version 4.2 of the customer schema, accessed from the primary production instance on 2024-11-15 at 14:23:17 UTC. The tables consulted were: [list]. The fields returned were: [list]. Here are the row-level data snapshots the system accessed."
If you cannot produce that, you have failed the first element.
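There is no single mandated format for such a record, but its shape can be sketched as a structured, content-addressed provenance entry. The following Python is a minimal illustration; the class name, fields, and values are hypothetical, not a standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class ProvenanceRecord:
    """The exact information picture a system consulted for one decision."""
    decision_id: str
    schema_version: str    # schema active at decision time, e.g. "4.2"
    source_instance: str   # which database instance was queried
    accessed_at: datetime  # UTC timestamp of the access
    tables: tuple          # tables consulted
    fields: tuple          # fields returned
    snapshot_digest: str   # hash of the row-level data actually read

    @staticmethod
    def digest(rows: list) -> str:
        """Content-address the row snapshot so it can be verified later."""
        canonical = json.dumps(rows, sort_keys=True, default=str)
        return hashlib.sha256(canonical.encode()).hexdigest()

# Illustrative data only - the Morrison application is the article's example.
rows = [{"applicant": "Morrison", "income_band": "C", "region": "NI"}]
record = ProvenanceRecord(
    decision_id="morrison-2024-11-15",
    schema_version="4.2",
    source_instance="primary-prod",
    accessed_at=datetime(2024, 11, 15, 14, 23, 17, tzinfo=timezone.utc),
    tables=("customers", "accounts"),
    fields=("applicant", "income_band", "region"),
    snapshot_digest=ProvenanceRecord.digest(rows),
)

# Re-hashing the archived snapshot later must reproduce the stored digest;
# that is what turns a log entry into verifiable provenance.
assert record.snapshot_digest == ProvenanceRecord.digest(rows)
```

The digest is the load-bearing part: a timestamp alone says when the data was read, but only a content hash lets you prove, months later, that the snapshot you produce for a regulator is the snapshot the system actually saw.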
Element 2: Retrieval Log
What did the system search for? More importantly, what did it fail to find?
Most organisations log the queries their systems run. Almost none log the absence of results. They do not record: "The system searched for the applicant's payment history and found nothing because there was no matching record."
Yet that gap is often more important than what the system found.
If a mortgage AI was supposed to consult historical payment data and that data did not exist, and the system proceeded without it, and the loan defaulted eighteen months later, a regulator will want to know: Did your organisation know that data was missing? Did your human oversight layer understand that the system was forming its view without complete information?
The retrievability guarantee requires that you can answer both questions with documentary evidence.
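The fix is structural, not heroic: route every lookup through a logger that records misses as first-class entries. A minimal sketch, with hypothetical query keys:

```python
from datetime import datetime, timezone

class RetrievalLog:
    """Records every lookup a system makes, including those that return nothing."""

    def __init__(self):
        self.entries = []

    def record(self, query: str, results: list) -> list:
        """Log the query and whether it hit, then pass the results through."""
        self.entries.append({
            "query": query,
            "hit": bool(results),
            "result_count": len(results),
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return results

    def gaps(self) -> list:
        """The searches that found nothing - the part most logging omits."""
        return [e["query"] for e in self.entries if not e["hit"]]

log = RetrievalLog()
log.record("payment_history:morrison", [])            # no matching record
log.record("credit_file:morrison", [{"score": 612}])  # found

# The gap itself is now documentary evidence, not an inference from silence.
assert log.gaps() == ["payment_history:morrison"]
```

Because the miss is an explicit record with a timestamp, the two regulatory questions above become answerable: the organisation demonstrably knew the data was missing, and the human oversight layer can be shown exactly which gaps it was told about.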
Element 3: Human Observability Record
Who reviewed what? When? What was their documented decision?
This is where most governance frameworks collapse.
Organisations log the AI's recommendation. They may log whether a human approved it or rejected it. But they almost never log the human's reasoning for their decision. They do not document whether the human actually understood the AI's logic. They do not record what alternative actions the human considered.
This is catastrophic when a regulator asks: "Walk me through how your human oversight layer would have caught an error in this decision."
If you cannot produce documentary evidence that a human reviewed the system's reasoning, not just rubber-stamped its output, but actually interrogated it, you have failed element three.
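What "interrogated, not rubber-stamped" means can be made concrete as a record schema plus a test over it. This is a hypothetical sketch, not a compliance template; the field names and threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class HumanReviewRecord:
    """One human oversight event, captured with its reasoning, not just its verdict."""
    decision_id: str
    reviewer: str
    reviewed_at: str               # ISO 8601 UTC timestamp
    saw_reasoning: bool            # opened the system's reasoning, not just the score
    gaps_acknowledged: list        # retrieval gaps the reviewer confirmed seeing
    alternatives_considered: list  # other actions the reviewer weighed
    decision: str                  # "approve" / "reject" / "escalate"
    rationale: str                 # the reviewer's own reasoning, in their words

def is_interrogation(r: HumanReviewRecord) -> bool:
    """A review counts as interrogation only if the reviewer saw the reasoning,
    weighed at least one alternative, and wrote a non-empty rationale."""
    return (r.saw_reasoning
            and len(r.alternatives_considered) > 0
            and bool(r.rationale.strip()))

review = HumanReviewRecord(
    decision_id="morrison-2024-11-15",
    reviewer="j.doe",
    reviewed_at="2024-11-15T15:02:00Z",
    saw_reasoning=True,
    gaps_acknowledged=["payment_history missing"],
    alternatives_considered=["request documents", "escalate to credit committee"],
    decision="escalate",
    rationale="Model formed its view without payment history; escalating for manual review.",
)

assert is_interrogation(review)
```

A bare approve/reject flag, by contrast, would fail `is_interrogation`, which is exactly the failure mode the regulator's walk-through question is designed to expose.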
Element 4: Retrievability Guarantee
Can you reproduce all three elements above, on demand, for any past decision?
This is the terminal test. Either you can or you cannot.
If you can reproduce the information provenance, the retrieval log, and the human observability record for any material decision in your system, you have achieved retrievability. You have the governance infrastructure that the EU AI Act requires.
If you cannot, if producing even one of those elements would require a forensic investigation into your systems, your logs, and your human decision records, then you do not have it.
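The terminal test reduces to a single pass/fail check: for a given decision, can all three records be produced from your archives on demand? A hypothetical sketch, where the three stores stand in for whatever archive you actually use:

```python
def retrievability_check(decision_id, provenance_store, retrieval_store, review_store):
    """Return (passed, missing): whether all three records exist for this decision,
    and which elements are absent if not. Stores are dicts keyed by decision id."""
    missing = [name for name, store in [
        ("information provenance", provenance_store),
        ("retrieval log", retrieval_store),
        ("human observability record", review_store),
    ] if decision_id not in store]
    return (len(missing) == 0, missing)

# One missing element fails the whole test - the guarantee is all-or-nothing.
provenance = {"morrison-2024-11-15": {"schema_version": "4.2"}}
retrievals = {"morrison-2024-11-15": [{"query": "payment_history", "hit": False}]}
reviews = {}  # nobody archived the human review

ok, missing = retrievability_check("morrison-2024-11-15",
                                   provenance, retrievals, reviews)
assert not ok
assert missing == ["human observability record"]
```

Run against every material decision, this check is the difference between asserting retrievability and demonstrating it: the output names the exact gap to remediate.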
The moment you run the test.
Here is what happens when an organisation takes the retrievability test seriously.
Phase one: The CTO is asked to identify five material AI-assisted decisions from the past six months. These are decisions with regulatory significance, reputational risk, or financial consequence.
Phase two: For each decision, the organisation is asked to produce the complete information picture. The data versions. The retrieval logs. The human oversight record. All on demand. All documented.
Phase three: The organisation is asked to prove that it can reproduce the exact state of the system at the moment each decision was formed. Not approximately. Exactly.
Organisations that pass this test have governance. Organisations that fail it discover, usually for the first time, that they have documentation and hope, but not architecture.
Why now.
The EU AI Act enforcement deadline is here. Regulators will audit this. Not in theory. In practice. Starting in August.
Organisations that have built their governance architecture for retrievability will pass. They will have the documentation, the structure, the human oversight records, and the proof that they can reproduce the information picture.
Organisations that built for transparency, that invested in explainability tools and logging systems and audit trails, will discover that these things are necessary but not sufficient. They will not have the forensic precision that regulators demand.
What comes next.
If your organisation can pass the retrievability test, you have the foundation. You have demonstrated that the governance infrastructure exists, that decisions can be audited, and that your oversight layer operates with documentary precision.
From there, the work is refinement, not redesign.
If your organisation fails the test, if any of the four elements is absent or incomplete, you have identified the exact gap that needs remediation. You know what is missing. You know what to build.
The organisations that act in May and June will be the ones that approach August with confidence.
The ones that wait will spend September explaining to regulators why they cannot answer questions they should have been able to answer six months earlier.
Run the test now. Find the gap. Fix it. That is the path to genuine governance readiness.
* * *
Dr. Ivan Roche FRSS FRSA MInstP
Founder and Principal Advisor · Otopoetic Limited · Belfast

