Introduction
In a small town in the Netherlands, a customer applied for a loan only to be declined by an algorithm without explanation. Later, she discovered that the AI had identified her address—located in a low-income neighbourhood—as a risk factor. The decision wasn't based on her criminal record or credit score; it was based on bias. For executives, this moment isn't about one person's fate—it's about the fragile thread of trust connecting brand, customer, and society. As AI becomes central to operations—from hiring to risk assessment—leaders face a stark question: Can your AI be trusted? Recent research from MIT Sloan and Harvard Business Review highlights that trust is built on transparency, fairness, and effective oversight. In a world demanding accountability, ethical AI isn't optional — it's strategic.
Can Your AI Be Trusted?
Why ethical oversight is now essential to AI deployment and your business reputation.
The ethical management of AI has a direct impact on brand trust and compliance, both of which are critical to long-term business sustainability. Ensuring AI decisions are transparent, explainable, and ethical will be central to maintaining customer and stakeholder trust.
AI can enhance ESG rigour—scanning supply chains for greenwashing or bias in hiring—with greater accuracy than human auditors. Yet, governance still lags: 42% of organisations say compliance is a priority, but only 26% embed it in their data teams. And while half of governments now mandate compliance with privacy and AI regulations, leadership too often treats oversight as a checkbox—not a cultural imperative.
Opportunity and Risk: Ethical AI fosters stakeholder confidence and mitigates reputational damage. But lack of transparency can lead to biased, unexplainable decisions—or worse, regulatory penalties.
Executive takeaway: Define AI governance roles, weave transparency into design, and publish simple explainability statements for all high-impact systems.
Why explainability matters
Transparent AI fosters trust — but only if people can follow its reasoning.
The plain fact is that people trust what they understand. A recent MIT Sloan study found that 77% of experts agree that human oversight and explainability are inseparable parts of responsible AI. And users trust AI more when they know how it works—even when it may stumble. Yet too many systems are black boxes: employees rubber-stamp decisions they can't interpret, eroding accountability.
Opportunity & Risk: Explainability supports smarter decisions and deeper accountability. However, superficial dashboards may lull leaders into a false sense of confidence.
Executive takeaway: Audit AI outputs regularly; demand models that explain "why", not just "what"; and train leaders and users to ask meaningful questions about the system.
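To make "why, not just what" concrete, here is a minimal sketch of one common auditing technique, permutation importance. It is illustrative only: the scikit-learn model, the synthetic data, and the feature names (income, debt ratio, postcode) are all assumptions, not a reference implementation. The idea is to check whether a sensitive proxy such as postcode is actually driving decisions.

```python
# Illustrative sketch (not a production audit): train a toy loan-approval
# model, then use permutation importance to ask WHY the model decides,
# not just WHAT it decides. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "postcode_index"]
X = rng.normal(size=(500, 3))
# Ground truth: approval depends only on income vs. debt; postcode is noise.
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn; a large accuracy drop means the model
# leans on that feature. A high score for postcode_index is a red flag.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If a sensitive proxy like postcode showed a large importance here, that would be exactly the hidden risk factor behind the loan denial in the opening story, and grounds to halt deployment pending review.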
Building trust through real-world results
Solving real problems builds more trust in AI than slick marketing.
Stakeholders judge AI by its impact. Time Magazine highlights GenCast – a model developed by DeepMind to improve 15-day weather forecasts – as shining proof that AI can serve humanity ethically when deployed responsibly.
Companies that embed AI in ESG processes—such as diversity or emissions tracking—signal authenticity and gain trust.
But hype still bites. The SEC recently flagged firms for overstating AI capabilities, warning against 'AI-washing' that erodes brand credibility. Leaders must strike a balance between ambition and humility—and deliver on their promises.
Executive takeaway: Tie AI deployments to clear business and social outcomes. Abandon speculation and anchor in use cases that matter.
Governance and regulation: the new imperative
Regulatory tides are rising—governance isn't just wise; it's essential.
The EU AI Act and similar frameworks in South Korea, Canada, and the United States aim to establish effective AI oversight, particularly for high-risk applications. As of mid-2025, governments expect enterprises to embed AI compliance into their core operations. Adopting governance frameworks now avoids painful missteps—and positions companies as trusted innovators.
Yet compliance alone isn't a strategy. It risks becoming box-ticking tedium, detached from ethics and outcomes.
Executive takeaway: Embed AI oversight into board and risk committees; map high-impact use cases; audit against AI regulations annually.
Conclusion
Ethical AI isn't just safety—it's strategy. Transparency, explainability, real-world proof, and governance weave together to uphold trust. As we move into 2026, leaders must ask not only "What can AI do for us?" but "How can it do it honourably?" If your AI falters in ethics, trust deteriorates—and so does your licence to operate.
For Leaders to Act
Appoint a senior AI ethics leader—reporting to the board.
Require explainability standards for every high-impact AI system.
Benchmark AI use against real-world outcomes and ESG goals.
Embed AI governance into risk frameworks and annual audits.
In the end, the measure of AI isn't just what it powers—but the trust it inspires in the organisation it serves.
Acronyms used in the article.
AI — Artificial Intelligence
What it means: Intelligence shown by machines—systems that analyze information, learn, and perform tasks humans typically do, like language, vision, or decision-making.
Why it matters: When leaders ask, “Can your AI be trusted?” they’re asking whether your intelligent systems act fairly, transparently, and with accountability.
ESG — Environmental, Social, and Governance
What it means: A framework used to evaluate a company's performance in three areas:
Environmental: How sustainable your operations are (e.g., carbon footprint).
Social: How you treat employees, customers, and communities.
Governance: How you're run—things like ethics, transparency, and accountability.
Why it matters: ESG scores influence investor confidence, regulatory standing, and brand reputation. Leaders embed these criteria to manage broad risks and reflect societal expectations.
SEC — U.S. Securities and Exchange Commission
What it means: The federal agency that regulates U.S. financial markets, protects investors, and enforces securities laws.
Why it matters: The SEC now scrutinizes companies’ claims about AI, flagging false or exaggerated statements as “AI-washing”—underscoring the cost of misleading stakeholders.
URLs of references used in this article.
MIT Sloan – AI Explainability: How to Avoid Rubber‑Stamping Recommendations
https://sloanreview.mit.edu/article/ai-explainability-how-to-avoid-rubber-stamping-recommendations/
MIT Sloan – In AI We Trust — Too Much?
https://sloanreview.mit.edu/article/in-ai-we-trust-too-much/
SEC – Press release on enforcement against Delphia (AI washing)
https://www.sec.gov/newsroom/press-releases/2024-36
Crowell & Moring – SEC Enforcement Actions Signal Enhanced Scrutiny Around "AI Washing"
https://www.crowell.com/en/insights/client-alerts/sec-enforcement-actions-signal-enhanced-scrutiny-around-ai-washing
Reuters – 'AI washing' – what lawyers need to know to stay ethical
https://www.reuters.com/legal/legalindustry/ai-washing-what-lawyers-need-know-stay-ethical-2025-02-10/
Reuters – AI washing: regulatory and private actions to stop overstating claims
https://www.reuters.com/legal/legalindustry/ai-washing-regulatory-private-actions-stop-overstating-claims-2025-05-30/
Wall Street Journal – SEC Head Warns Against 'AI Washing,' the High-Tech Version of Greenwashing
https://www.wsj.com/articles/sec-head-warns-against-ai-washing-the-high-tech-version-of-greenwashing-6ff60da9
MIT Sloan – New report documents the business benefits of 'responsible AI'
https://mitsloan.mit.edu/ideas-made-to-matter/new-report-documents-business-benefits-responsible-ai
* * *
Dr. Ivan Roche FRSS FRSA MInstP
Founder and Principal Advisor · Otopoetic Limited · Belfast

