When regulators ask whether a decision can be reconstructed, most boards discover they have no evidence. Not documentation — evidence. The complete information picture that existed at the moment the system decided. I help boards close the gap between adoption and defensibility.
NED at Unclocks & NeuraBloom · Founder, Otopoetic · 25+ years executive leadership · 50+ GLG/Guidepoint engagements
The most dangerous assumption in enterprise AI is that deployment equals governance. Boards approve AI investments, technology teams build and ship, and somewhere between the two, accountability disappears. I work in that gap — helping boards understand not just what their AI systems do, but what those systems knew when they did it, and whether the organisation can prove it under scrutiny.
I trained in astrophysics at Queen's University Belfast, which taught me to extract signal from noise in complex systems under uncertainty. I spent twenty-five years in enterprise technology leadership across insurance, aviation, financial services, and public sector — which taught me that most system failures are governance failures in disguise. I now focus on the gap every board overlooks: the ability to prove what your AI knew at the moment it decided.
Through Otopoetic Limited, I provide AI governance advisory built around two proprietary frameworks: the Governance Classification (A·E·C·R·M), which gives every organisation a precise, five-dimensional governance address, and the Digital Alibi, a forensic evidentiary standard for reconstructing AI decisions as they existed at the moment they were made — not assembled retrospectively when inquiry arrives.
I hold a Fellowship of the Royal Statistical Society and serve as Secretary of its Northern Ireland Local Group. I am a Fellow of the Royal Society of Arts, a Member of the Institute of Physics, and the author of four books, and I am completing an academic working paper on AI governance frameworks at Lloyd's of London. I participate in the GLG and Guidepoint expert networks, with over 50 completed engagements spanning AI governance, cybersecurity, insurtech, and aviation technology.
Based in Belfast. Operating across the UK and Ireland. Writing at The Roche Review.
A governance-focused NED brings something different from a technology-strategy NED: the ability to ask the question a regulator will ask, before the regulator does. What follows is what I bring to a board mandate.
Using Otopoetic's proprietary A·E·C·R·M framework, I locate your organisation across five independent governance dimensions — Accountability, Exposure, Control, Regulation, Maturity. The result is a precise address, not a traffic light. It maps directly to regulatory ceilings and defensibility gaps. Interactive self-assessment at otopoetic.com →
A structured forensic review establishing whether your organisation can reconstruct the complete information picture behind every material AI-assisted decision — as it existed at the moment it was made. Not retrospectively. Not from memory. This is the evidentiary standard regulators apply. Learn more at otopoetic.com →
A prioritised, board-facing action plan structured around your governance address. Identifies the gaps that carry the most regulatory and fiduciary risk, with clear accountability ownership and an implementation sequence mapped to your regulatory obligations.
Ongoing constructive challenge on AI governance posture, regulatory horizon, and accountability architecture. Ensures the board can answer the questions a regulator, litigant, or shareholder will ask — before they arrive. Includes regulatory monitoring and governance address updates.
Providing independent CTO-level oversight and constructive board challenge to a growing B2B technology company. Focused on technology strategy, governance frameworks, and ensuring alignment between investor commitments and operational delivery.
Governance, technology strategy, and KPI framework design for a not-for-profit delivering art-based programmes for neurodivergent individuals and young people. Leading phased technology architecture planning and board-level strategy documentation.
Boutique strategic technology advisory and technical due diligence consultancy. Serving PE firms, institutional investors, and legal counsel with systems thinking, AI governance, and forensic technology analysis.
Part-time senior technology leadership for growth-stage companies that need strategic direction without the overhead of a full-time hire. Covering architecture decisions, team structure, vendor selection, and technology due diligence.
Contributing to a regional foresight study on quantum technologies, supporting the identification of strategic opportunities, capability gaps, and pathways for adoption across Northern Ireland's innovation ecosystem. Commissioned by Matrix NI and delivered by SAMI Consulting.
Contributing to Reed's innovation strategy, providing practitioner insight on the intersection of AI, recruitment technology, and workforce transformation from a governance and advisory perspective.
Organising the regional programme for the Royal Statistical Society, connecting statisticians, data scientists, and researchers across Northern Ireland with the national professional body.
PE firms and institutional investors use expert networks to vet advisors before engagement. The record below is what they find.
Boards need to know their advisor speaks the regulatory language that will govern AI accountability. I advise boards on the following frameworks — not from a compliance checklist perspective, but from the question a regulator or litigant will actually ask.
High-risk AI system obligations under the EU AI Act demand board-level defensibility evidence, not documentation of intent. Governance structures must be in place by the 2 August 2026 high-risk compliance deadline, not assembled afterwards.
Senior managers bear personal accountability for AI governance failures under SM&CR. The question is not whether the board signed off a policy — it is whether the senior manager can prove they understood, challenged, and owned the governance at the moment decisions were made.
DORA's third pillar requires contemporaneous decision evidence — not retrospective documentation. Organisations must demonstrate that governance structures were operational at decision time, not reconstructed in response to an incident.
NIS2 requires evidence of governance maturity and human oversight capability at system design and decision time. Organisations that can demonstrate their oversight architecture precedes deployment are in a fundamentally different position under inquiry.
Completing an academic working paper on AI governance frameworks at Lloyd's of London, including a NED Competency Framework and regulatory control matrix for the insurance market. Forthcoming on ResearchGate. Peer-reviewed by Paul McGee, co-founder of SITP London.
AI systems that process personal data in automated decision-making create specific obligations under UK GDPR and ICO guidance. Board-level accountability requires clear ownership of the data governance layer within the AI decision pipeline.
An initial conversation to understand your governance address — where you sit across the five A·E·C·R·M dimensions. Not every organisation has the same gaps. This conversation determines what matters most and what the right engagement looks like.
A structured assessment using the Governance Classification framework. Establishes your precise governance address, identifies defensibility gaps, maps regulatory obligations to your specific sector ceiling, and tests whether your AI decisions can be forensically reconstructed. Delivered as a board-ready document.
A prioritised action plan with clear accountability ownership. For NED mandates, this becomes the foundation for ongoing constructive board challenge — ensuring the board can answer the question a regulator will ask before the regulator does. Includes Digital Alibi infrastructure design for organisations that need contemporaneous decision evidence.
These are the ideas I keep returning to — in boardrooms, in advisory work, and in the writing I publish at The Roche Review. They are the questions I believe every board deploying AI should be able to answer before a regulator asks them.
Mastering AI Conversations to Lead the Future. An 11-chapter guide for leaders on the art and science of prompt engineering, covering practical exercises, case studies, ethical implications, and emerging trends in multimodal AI. View on Amazon →
Embracing the AI Revolution for a Future-Ready Workforce. A practical guide to building AI-capable teams, scaling AI across organisations, and embedding ethics at every level of workforce development. View on Amazon →
Belfast's journey from Linenopolis to Tech Hub. Tracing the city's innovative spirit from Queen Victoria's 1849 visit through the Titanic era and the Good Friday Agreement to its emergence as a technology centre. View on Amazon →
Cork's Technological Odyssey. Charting Cork's journey from monastic settlement in 650 AD to a modern technology hub, weaving together pivotal moments in the city's history of resilience and innovation. View on Amazon →
AI governance and executive risk, explored in depth. For boards that need to know more than their briefings tell them. The anchor piece, The Digital Alibi, is live now. Read the latest issue →
A practitioner-focused working paper on AI governance frameworks at Lloyd's of London, including a NED Competency Framework and regulatory control matrix for the insurance market. Forthcoming on ResearchGate. Peer-reviewed by Paul McGee, co-founder of SITP London. ResearchGate profile →