AI Governance Frameworks
PRISM is built to support compliance with the major AI governance frameworks now in force or coming into effect. Here is what each framework requires — and who it applies to.
EU AI Act
Regulatory
The world's first comprehensive legal framework for AI, establishing binding obligations based on risk classification.
The EU AI Act categorises AI systems into four risk tiers — unacceptable, high, limited, and minimal — and imposes obligations proportionate to that risk. High-risk systems (used in areas such as healthcare, credit, employment, and law enforcement) must meet requirements around data governance, transparency, human oversight, robustness, and accuracy before they can be deployed in the EU market. Providers must register their systems, maintain technical documentation, and conduct post-market monitoring. The Act also introduces requirements around general-purpose AI models, including transparency obligations and, for the most capable models, systemic risk assessments.
Key Requirements
- Risk classification and conformity assessment
- Technical documentation and audit trails
- Human oversight mechanisms
- Post-market monitoring and incident reporting
- Registration in the EU AI database (for high-risk systems)
Who It Affects
Organisations that develop, deploy, or use AI systems within the EU, or whose AI outputs affect EU residents.
Timeline
Phased implementation from 2024 to 2027. The Act entered into force in August 2024; prohibitions on unacceptable-risk practices applied from February 2025, with high-risk obligations phasing in through 2026–2027.
ISO 42001
Management Standard
The first international standard for AI management systems, providing a structured framework for responsible AI governance within organisations.
ISO/IEC 42001:2023 defines requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). Modelled on the structure of ISO 27001 (information security) and ISO 9001 (quality), it gives organisations a systematic, auditable approach to managing AI risk across the full lifecycle — from development and procurement through deployment and decommissioning. Certification to ISO 42001 provides independent assurance to customers, regulators, and partners that an organisation governs its AI responsibly.
Key Requirements
- Establishment of an AI management system (AIMS)
- AI policy and organisational context definition
- Risk and impact assessment processes
- Lifecycle management of AI systems
- Internal audit and continual improvement
Who It Affects
Any organisation developing, providing, or using AI systems — regardless of sector or size.
Timeline
Published December 2023. Certification available immediately through accredited bodies.
NIST AI RMF
Risk Framework
A voluntary but widely adopted framework from the US National Institute of Standards and Technology for managing AI risk across four core functions.
The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, helps organisations incorporate trustworthiness considerations into the design, development, use, and evaluation of AI systems. It organises AI risk management into four core functions — Govern, Map, Measure, and Manage — and is intended to be flexible, measurable, and adaptable across different contexts and sectors. It is increasingly referenced by US federal agencies and regulators, and complements international standards such as ISO 42001.
Key Requirements
- Govern — Establish AI risk governance culture, policies, and accountability
- Map — Categorise AI risks in context and identify affected stakeholders
- Measure — Analyse, assess, and track AI risks using appropriate methods
- Manage — Prioritise and act on AI risk responses and residual risks
Who It Affects
Primarily US-focused, but widely used globally by organisations seeking a structured voluntary approach to AI risk.
Timeline
AI RMF 1.0 published January 2023. Ongoing development of sector-specific profiles.
FCA AI Update
Sector-Specific
Emerging guidance from the UK Financial Conduct Authority on the responsible use of AI in financial services, grounded in existing principles-based regulation.
The FCA has signalled that AI in financial services is subject to its existing regulatory framework — including the Consumer Duty, Senior Managers and Certification Regime (SM&CR), and operational resilience requirements — and has indicated it will continue to develop targeted guidance. Firms using AI in customer-facing decisions (credit, insurance pricing, fraud detection, advice) must be able to demonstrate that outcomes are fair, explainable, and free from harmful bias. The FCA expects boards and senior managers to maintain oversight and accountability for AI systems, and has engaged with industry through its AI Lab and data-led regulatory initiatives.
Key Requirements
- Consumer Duty compliance — fair outcomes for retail customers
- SM&CR accountability — senior manager responsibility for AI decisions
- Explainability of AI-driven decisions affecting consumers
- Operational resilience of AI systems in critical services
- Bias testing and fairness monitoring in high-impact use cases
Who It Affects
FCA-authorised firms using AI in any customer-facing or decision-making context — including banks, insurers, wealth managers, and fintechs.
Timeline
Principles-based obligations apply now. Targeted AI guidance expected to develop iteratively through 2025–2026.
Manage your compliance with PRISM
PRISM maps your AI activities to each of these frameworks, generates evidence, and keeps your governance posture audit-ready — all in one place.