EU AI Act Compliance Checklist

A 47-point checklist covering the requirements for high-risk AI systems under the EU AI Act.

Based on Articles 8-15 of the EU AI Act.

Art. 9 Risk Management System

Establish & Maintain Risk Management

Risk identification: Identified and analyzed known and foreseeable risks
Risk estimation: Estimated risks from intended use and foreseeable misuse
Risk evaluation: Evaluated risks based on post-market monitoring data
Mitigation measures: Adopted appropriate risk mitigation measures
Residual risk: Documented acceptable residual risk levels
Testing: Conducted testing against defined metrics and thresholds

Art. 10 Data & Data Governance

Training, Validation & Testing Data

Data governance: Documented data governance and management practices
Design choices: Documented relevant design choices for data collection
Data relevance: Ensured training data is relevant and representative
Error-free data: Examined data for errors and corrected issues
Bias assessment: Assessed possible biases in training data
Data gaps: Identified and addressed data gaps or shortcomings
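A first pass at the representativeness and data-gap items above can be automated by checking group shares in a demographic column. The 10% floor and the toy age bands below are assumed project-level choices; the Act sets no numeric threshold.

```python
from collections import Counter

def underrepresented_groups(labels, min_share=0.10):
    """Flag groups whose share of the dataset falls below min_share."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(g for g, n in counts.items() if n / total < min_share)

# Toy demographic column from a training set
ages = ["18-34"] * 60 + ["35-54"] * 35 + ["55+"] * 5
print(underrepresented_groups(ages))  # ['55+'] -- below the assumed 10% floor
```

Checks like this do not establish that data is representative for the intended purpose, but they surface obvious gaps early enough to document or correct them.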

Art. 11 Technical Documentation

Documentation Requirements

General description: Documented intended purpose and system overview
System elements: Described all elements and development process
Monitoring capabilities: Documented monitoring and logging features
Risk management docs: Included risk management system documentation
Pre-market changes: Documented all changes made before market placement
Conformity info: Included conformity assessment information

Art. 12 Record-Keeping

Automatic Logging & Traceability

Automatic logging: System automatically records events during operation
Usage period: Logs cover the period of intended use
Traceability: Logs enable tracing of AI system operation
Log retention: Logs kept for appropriate duration (min. 6 months)
Incident recording: System logs incidents and malfunctions

Art. 13 Transparency

Transparency & User Information

User instructions: Provided clear instructions for use
Provider identity: Clearly identified provider and contact information
System characteristics: Documented capabilities and limitations
Performance levels: Specified accuracy and performance metrics
Known risks: Disclosed known risks and the circumstances that may lead to them
Human oversight info: Documented human oversight measures required

Art. 14 Human Oversight

Human Oversight Mechanisms

Oversight design: Built-in human oversight mechanisms from design phase
Understanding capabilities: Humans can understand AI capabilities and limitations
Output interpretation: Humans can correctly interpret AI outputs
Override capability: Humans can override or reverse AI decisions
Stop function: System includes ability to halt operation
Fail-closed: System defaults to safe state when oversight fails
Automation bias prevention: Measures to prevent over-reliance on AI
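The oversight mechanisms above can be sketched as a human-in-the-loop gate around model outputs. The confidence floor, escalation behavior, and class name are illustrative assumptions; they are one possible design, not the Act's prescribed mechanism.

```python
class OversightGate:
    """Wraps automated decisions with override, stop, and fail-closed paths."""

    def __init__(self, confidence_floor: float = 0.85):
        self.confidence_floor = confidence_floor  # assumed threshold
        self.stopped = False

    def stop(self) -> None:
        # "Stop function": a human can halt all automated decisions
        self.stopped = True

    def decide(self, prediction: str, confidence: float, reviewer=None) -> str:
        if self.stopped:
            return "halted"             # fail-closed: no output once stopped
        if confidence < self.confidence_floor:
            if reviewer is None:
                return "escalated"      # no human available -> no auto-decision
            return reviewer(prediction) # human may override or reverse the output
        return prediction

gate = OversightGate()
print(gate.decide("approve", 0.93))                             # approve
print(gate.decide("approve", 0.60, reviewer=lambda p: "deny"))  # deny
gate.stop()
print(gate.decide("approve", 0.99))                             # halted
```

Routing low-confidence cases to a reviewer rather than auto-deciding is also one concrete countermeasure against automation bias: the human sees the case before any machine output is acted on.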

Art. 15 Accuracy, Robustness & Security

Technical Standards

Accuracy levels: Achieved appropriate accuracy for intended purpose
Accuracy declaration: Declared accuracy levels in documentation
Robustness: System is resilient to errors and inconsistencies
Redundancy: Implemented backup solutions where appropriate
Cybersecurity: Protected against unauthorized access and attacks
Adversarial robustness: Resilient against adversarial manipulation
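The accuracy-declaration item above implies a simple release check: measured accuracy should not fall below the level declared in the technical documentation. The 0.92 declared level and the toy labels below are made-up examples.

```python
DECLARED_ACCURACY = 0.92  # assumed figure from the technical documentation

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def meets_declaration(predictions, labels, declared=DECLARED_ACCURACY):
    """Compare measured accuracy against the declared level."""
    measured = accuracy(predictions, labels)
    return measured >= declared, measured

preds = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
truth = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
ok, measured = meets_declaration(preds, truth)
print(ok, measured)  # False 0.9 -- below the declared 0.92 level
```

Wiring a check like this into the release pipeline keeps the declared accuracy honest: a build that underperforms its own documentation fails before market placement rather than after.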

Additional General Requirements

Conformity & Market Placement

Conformity assessment: Completed required conformity assessment
CE marking: Applied CE marking where required
EU database: Registered in EU AI database
Post-market monitoring: Established post-market monitoring system
Incident reporting: Process for reporting serious incidents

Get Your Automated Compliance Audit

This checklist is a starting point. Our AI-powered audit tool provides detailed gap analysis, specific recommendations, and audit-ready documentation.
