The 5 Questions Your Board Will Ask About AI — and How to Answer Them
Your legal team isn't blocking AI because they don't understand it. They're blocking it because your technical team can't answer five specific questions.
Boards in regulated industries are not anti-AI. They are anti-risk that they can't see, measure, or defend.
The pattern repeats across financial services, insurance, and healthcare: an AI initiative gets approved, runs for six months, then hits a wall when legal, risk, or the board asks questions that the technical team can't answer cleanly.
Here are the five questions. More importantly, here are the answers you need to have ready.
1. "Where is our data going?"
This is not paranoia. In FCA-regulated firms and NHS environments, data residency is a hard requirement. Every managed AI API endpoint is a potential data sovereignty issue.
The answer you need: a clear data flow diagram showing every point where data touches an AI model, which models are managed vs self-hosted, and what contractual protections exist for any data leaving your estate.
If you can't draw that diagram today, you have a governance gap.
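To make the inventory behind that diagram concrete, here is a minimal sketch of a machine-readable flow register in Python. Every field name and entry is illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One point where data touches an AI model."""
    source_system: str       # where the data originates
    model: str               # which model receives it
    hosting: str             # "managed" or "self-hosted"
    data_classes: list[str]  # e.g. ["customer PII"]
    residency: str           # where processing and storage happen
    protection: str          # contractual cover for data leaving the estate

# Illustrative entries -- replace with your own estate
flows = [
    DataFlow("CRM", "gpt-4o via managed API", "managed",
             ["customer PII"], "vendor-controlled, US",
             "DPA with zero-retention addendum"),
    DataFlow("claims pipeline", "llama-3-70b", "self-hosted",
             ["claims text"], "UK, own VPC",
             "n/a: data never leaves the estate"),
]

# Any flow you cannot fill in completely is the governance gap
for f in flows:
    print(f"{f.source_system} -> {f.model} ({f.hosting}, {f.residency})")
```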
2. "What happens when it's wrong?"
Hallucination is a known failure mode of LLM systems. Boards understand this — what they need to know is how you detect it, how you contain it, and who is accountable when it happens.
The answer you need: an explicit hallucination mitigation framework. This means output validation, confidence scoring, human-in-the-loop escalation for low-confidence decisions, and an audit trail that shows every AI output is traceable to source material.
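The escalation logic itself is simple. Here is a minimal sketch in Python, assuming you already produce a confidence score and retrieved source passages for each output; the threshold and names are illustrative:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # illustrative; tune per use case

@dataclass
class AIOutput:
    answer: str
    confidence: float   # from your scoring step, however you compute it
    sources: list[str]  # retrieved passages the answer must trace back to

def audit_log(output: AIOutput, action: str, reason: str) -> None:
    # Stand-in for an append-only audit store
    print(f"{action}: {reason} (sources={len(output.sources)})")

def route(output: AIOutput) -> str:
    """Contain suspect outputs instead of releasing them."""
    if not output.sources:
        audit_log(output, "blocked", "no traceable source material")
        return "blocked"
    if output.confidence < CONFIDENCE_THRESHOLD:
        audit_log(output, "escalated", "low confidence")
        return "human_review"  # human-in-the-loop queue
    audit_log(output, "released", "passed checks")
    return "released"

# An unsourced answer is blocked no matter how confident the model sounds
route(AIOutput(answer="...", confidence=0.97, sources=[]))
```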
3. "Who is responsible for AI decisions?"
Under FCA guidance and the Senior Managers and Certification Regime, accountability cannot be delegated to an algorithm. A named individual must own each AI system in production.
The answer you need: an AI accountability register. Every production AI system should be documented with four things: what it does, who owns it, what it can and cannot decide autonomously, and how its decisions are reviewed.
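A register entry can be as simple as one structured record per system. A sketch, with illustrative field names and a hypothetical system:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    system: str
    purpose: str                      # what it does
    owner: str                        # the named accountable individual
    autonomous_decisions: list[str]   # what it may decide without a human
    prohibited_decisions: list[str]   # what it must never decide alone
    review_process: str               # how decisions are reviewed, and how often

register = [
    RegisterEntry(
        system="claims-triage-assistant",   # hypothetical system
        purpose="Suggests a priority band for incoming claims",
        owner="Head of Claims Operations",  # a person, not a team
        autonomous_decisions=["priority banding"],
        prohibited_decisions=["claim denial", "fraud determination"],
        review_process="Weekly QA sample; quarterly owner sign-off",
    ),
]
```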
4. "What is our regulatory exposure?"
The EU AI Act, the UK's regulator-led approach, and emerging FCA AI guidance create real obligations for high-risk AI systems. Most technical teams have not formally mapped their AI systems against these frameworks.
The answer you need: a risk tier classification for each AI system (prohibited, high risk, limited risk, minimal risk, mirroring the EU AI Act's tiers) and a documented compliance position for each high-risk system.
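The classification is easy to encode once you sit down to do it. A sketch using those four tiers; the system names and file paths are placeholders:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# system -> (tier, path to its documented compliance position)
classification = {
    "claims-triage-assistant": (RiskTier.HIGH, "compliance/claims-triage.md"),
    "customer-chatbot":        (RiskTier.LIMITED, "compliance/chatbot-transparency.md"),
    "internal-docs-search":    (RiskTier.MINIMAL, None),
}

# The check the board is implicitly asking for
for system, (tier, position) in classification.items():
    if tier is RiskTier.HIGH and position is None:
        raise ValueError(f"{system}: high risk with no documented compliance position")
```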
5. "What does it cost and is it worth it?"
This is often the last question asked but the first one that should be answered. Boards want to see a cost model — not just the upside case but the realistic range, including infrastructure, governance, and ongoing oversight costs.
The answer you need: a model of the total cost of AI ownership, not just the token bill. Include the hidden costs: human review time, compliance overhead, model refresh cycles, and monitoring infrastructure.
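A back-of-the-envelope version of that model fits in a few lines. Every figure below is a placeholder, not a benchmark:

```python
# Annual total cost of AI ownership -- every figure is a placeholder
token_bill          = 120_000      # inference / API spend
human_review        = 2 * 45_000   # review time, as FTE-equivalents
compliance_overhead = 60_000       # audits, documentation, legal review
model_refresh       = 40_000       # re-evaluation and migration per upgrade cycle
monitoring_infra    = 25_000       # logging, dashboards, alerting

total = (token_bill + human_review + compliance_overhead
         + model_refresh + monitoring_infra)
print(f"Total annual cost: {total:,}")                 # 335,000
print(f"Token bill share: {token_bill / total:.0%}")   # 36%
```

The point of the exercise is the ratio, not the absolute numbers: if the token bill is only a third of the true cost, a business case built on the token bill alone is wrong by a factor of three.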
The practical implication
These five questions define the minimum governance posture for regulated AI. If you can answer all five cleanly, you are in a strong position. If you can answer two or three, you have specific gaps to close. If you can't answer any of them, you have a programme that is running ahead of its governance.
None of this is technically complex. It is documentation, process, and accountability design. The hard part is making the time to do it before the board asks.
Dealing with this in your organisation?
Book a 30-minute call. No pitch — a direct conversation about your specific situation.