
Published on: May 26, 2025 | Updated: May 26, 2025

The EU AI Act Explained

Artificial intelligence (AI) is reshaping industries, public services, and everyday consumer experiences at an unprecedented pace. As these technologies become more integrated into critical systems and decision-making processes, the need for clear and consistent regulation has intensified. In response, the European Union has adopted the Artificial Intelligence Act (EU AI Act)—the first comprehensive legal framework of its kind designed to govern the development and use of AI. 

In this article, we’ll discuss what the EU AI Act is, why it matters, its key requirements, and the impact it will have on businesses.

What is the EU AI Act and Why Does It Matter?

The EU AI Act is the European Union’s landmark legislation created to regulate the safe and ethical use of artificial intelligence. As AI systems increasingly power everything from healthcare tools and financial services to recruitment software and government decision-making, the Act introduces a risk-based framework to ensure these technologies are trustworthy, transparent, and aligned with fundamental rights. 

Approved in 2024 and set for full enforcement by August 2, 2026, the EU AI Act will have far-reaching implications for organizations operating within or serving customers in the EU. At its core, the AI Act establishes rules for how AI systems can be developed, deployed, and monitored regardless of location. Businesses will be required to assess the risks of their AI systems, meet stringent transparency and accountability standards, and ensure ongoing compliance with regulatory obligations.

Who Needs to Pay Attention?

The EU AI Act applies broadly to any organization that develops, deploys, or sells AI systems within the European Union—or to EU citizens—regardless of where the company is headquartered. Its extraterritorial scope means global businesses using AI technologies must ensure compliance if they operate in or serve the EU market. 

Because the regulation impacts how AI is designed, trained, implemented, and monitored, compliance is not the responsibility of a single department; it requires coordinated action across the organization. 

Compliance Requirements Under the EU AI Act

The EU AI Act categorizes AI systems into four distinct risk levels: unacceptable, high, limited, and minimal. Each category comes with its own set of legal and regulatory obligations, determining the degree of scrutiny and compliance required for the system’s development and use. 

While most AI systems will fall into the minimal or limited risk categories, the European Union estimates that approximately 5–15% of existing AI applications will be classified as high-risk. These systems will face the most stringent compliance requirements under the Act, representing a relatively small share of the market but carrying significant regulatory burdens for the affected organizations.


Unacceptable Risk

Definition: AI systems posing a clear threat to the safety, livelihoods, or rights of individuals.

Examples:

  • Social scoring by governments

  • Cognitive behavioral manipulation

  • Real-time biometric surveillance in public spaces (limited exceptions)

Regulatory Status: Prohibited. Banned outright from use in the EU.

Key Compliance Requirements:

  • Cannot be developed, marketed, or deployed in the EU

  • Limited exceptions may apply for law enforcement under strict safeguards

High Risk

Definition: AI systems used in critical infrastructure or decision-making processes that significantly impact individuals’ lives or rights.

Examples:

  • Biometric identification

  • AI used in hiring, education, and credit scoring

  • Medical devices

  • AI in law enforcement or border control systems

Regulatory Status: Tightly regulated. Permitted only if strict requirements are met.

Key Compliance Requirements:

  • Conformity assessments and CE marking

  • Risk management and mitigation procedures

  • Robust data governance practices

  • Detailed technical documentation

  • Logging and traceability

  • Human oversight mechanisms

  • Transparency to users and authorities

Limited Risk

Definition: AI systems that interact with users or could influence user behavior, but do not directly impact rights or safety.

Examples:

  • AI chatbots

  • Deepfakes (non-malicious)

  • Recommendation engines

Regulatory Status: Transparency obligations. Allowed with specific communication requirements.

Key Compliance Requirements:

  • Systems must inform users they are interacting with AI

  • For deepfakes, clear disclosure that content is artificially generated or manipulated

Minimal Risk

Definition: AI systems with low or no impact on users’ rights or safety.

Examples:

  • Spam filters

  • AI in video games

  • Predictive text input tools

Regulatory Status: Unregulated. No mandatory requirements under the Act.

Key Compliance Requirements:

  • No legal obligations

  • Voluntary codes of conduct encouraged

  • Developers are encouraged to follow ethical guidelines for responsible AI development
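For teams that want to encode this framework in internal tooling, the four tiers and their headline obligations map naturally onto a small data structure. The sketch below is a hypothetical Python example: the tier names follow the Act, but the identifiers and obligation summaries are illustrative shorthand, not official wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative summary of headline obligations per tier (not exhaustive).
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Prohibited from the EU market"],
    RiskTier.HIGH: [
        "Conformity assessment and CE marking",
        "Risk management system",
        "Data governance",
        "Technical documentation",
        "Logging and traceability",
        "Human oversight",
        "Transparency to users and authorities",
    ],
    RiskTier.LIMITED: [
        "Disclose that users are interacting with AI",
        "Label generated or manipulated content",
    ],
    RiskTier.MINIMAL: ["No mandatory requirements; voluntary codes of conduct"],
}

if __name__ == "__main__":
    # Example: list what a high-risk system would need to demonstrate.
    for obligation in TIER_OBLIGATIONS[RiskTier.HIGH]:
        print(f"High-risk obligation: {obligation}")
```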

What Makes an AI System "High-Risk"?

According to the EU AI Act, an AI system is considered high-risk if it: 

  • Is used in sensitive or safety-critical sectors such as healthcare, education, employment, law enforcement, or infrastructure. 

  • Significantly influences individuals’ legal or material standing. 

  • Is listed in Annex III of the Act (which will be updated periodically based on technological and societal developments).

What are the Requirements for a “High-Risk” AI System?

Organizations developing or using high-risk AI systems must meet the following requirements to comply with the EU AI Act: 

  1. Risk Management System (Article 9): Implement a continuous, documented process to identify, evaluate, and mitigate risks throughout the AI system’s lifecycle. 
     

  2. Data and Data Governance (Article 10): Ensure training, validation, and testing datasets are relevant, representative, free of errors, and statistically appropriate to reduce bias and ensure fairness. 
     

  3. Technical Documentation (Article 11): Maintain detailed technical documentation that demonstrates compliance with the Act and enables authorities to assess the system’s conformity. 
     

  4. Record-Keeping and Logging (Article 12): Design systems to automatically record events (logging) to ensure auditability, traceability, and incident investigation (a minimal logging sketch follows this list). 
     

  5. Transparency and Information to Users (Article 13): Provide clear instructions and documentation to users about how the AI system functions, its limitations, and how to use it safely and effectively. 
     

  6. Human Oversight (Article 14): Integrate mechanisms that ensure meaningful human involvement. The AI system must not override or mislead human decision-makers. 
     

  7. Accuracy, Robustness, and Cybersecurity (Article 15): Design systems to deliver accurate, reliable, and secure performance. They must withstand manipulation and continue functioning under normal and foreseeable conditions. 
     

  8. CE Marking and EU Declaration of Conformity (Articles 47 and 48): Apply the CE marking to indicate conformity and submit a formal declaration that the AI system meets EU regulatory standards. 
     

  9. Conformity Assessment (Article 43): Conduct internal checks or third-party audits (depending on the system type) to certify that the AI system meets all legal requirements before it is placed on the market or put into service. 
     

  10. Post-Market Monitoring (Article 72): Implement processes to monitor performance and risks after deployment and ensure ongoing compliance throughout the system's operational life. 
     

  11. Incident Reporting (Article 73): Establish protocols for notifying national authorities of serious incidents or system malfunctions that pose risks to health, safety, or fundamental rights.
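To make the record-keeping and logging obligation (Article 12) above more concrete, here is a minimal sketch of structured, append-only decision logging, assuming a simple JSON-lines store. The event fields, file name, and function are illustrative choices, not anything prescribed by the Act.

```python
import json
import time
from pathlib import Path

# Hypothetical append-only event log supporting auditability and traceability.
LOG_PATH = Path("ai_decision_log.jsonl")

def log_decision_event(system_id: str, input_ref: str, output: str,
                       model_version: str, operator: str | None = None) -> None:
    """Append one decision event as a JSON line so auditors can reconstruct it later."""
    event = {
        "timestamp": time.time(),          # when the decision was made
        "system_id": system_id,            # which AI system produced it
        "model_version": model_version,    # exact model/version for traceability
        "input_ref": input_ref,            # reference to the input data, not the raw data
        "output": output,                  # the decision or score produced
        "human_reviewer": operator,        # who (if anyone) reviewed or overrode it
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Example: record a credit-scoring decision alongside the analyst who approved it.
log_decision_event("credit-scoring-v2", "application-8841", "score=640",
                   model_version="2.3.1", operator="analyst-17")
```

Logging references to inputs rather than raw personal data, as in the sketch, also keeps the audit trail easier to reconcile with data-protection obligations.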

What are Other Compliance Requirements for the EU AI Act?

To meet the obligations set forth under the EU AI Act, organizations must implement a range of technical and procedural controls. These requirements are designed to ensure AI systems are safe, transparent, and aligned with fundamental rights throughout their lifecycle. 

  • Risk management processes that cover the entire lifecycle, from development through software updates and re-deployment.

  • Data governance practices that keep training data accurate, unbiased, and relevant.

  • Transparency measures that notify users whenever they are interacting with an AI system.

  • Human oversight mechanisms that allow meaningful intervention and manual override of automated decisions.

  • Comprehensive logging and traceability, enabling audits and accountability.

  • CE marking as the visible indicator that a conformity assessment has been completed and the system meets EU requirements.

Achieving these standards will require a cross-functional effort: legal, compliance, product, and technical teams must align culturally and operationally to integrate these safeguards into their workflows.

How Can Businesses Prepare for the EU AI Act?

Preparing for the EU AI Act is a strategic, multifaceted effort rather than a one-off project. Below are the steps organizations can take to prepare for the requirements:

Step 1: Audit Your AI Ecosystem 

Begin by identifying all AI and automated systems in use across your organization, both internal (e.g., employee monitoring, decision-making tools) and external (e.g., customer-facing systems). This inventory forms the baseline for evaluating compliance gaps and areas for improvement. 
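One lightweight way to start this inventory is a shared, structured record per system. The sketch below assumes a hypothetical Python data model; the field names and example entries are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI system inventory (illustrative fields)."""
    name: str
    owner_team: str
    purpose: str
    user_facing: bool                       # internal tool vs. customer-facing system
    data_sources: list[str] = field(default_factory=list)
    vendor: str | None = None               # None if developed in-house

# Example inventory entries (hypothetical systems).
inventory = [
    AISystemRecord("resume-screener", "HR", "Rank job applicants", user_facing=False,
                   data_sources=["applicant CVs"], vendor="ExampleVendor"),
    AISystemRecord("support-chatbot", "Customer Success", "Answer customer questions",
                   user_facing=True, data_sources=["help-center articles"]),
]

for record in inventory:
    print(record.name, "-", record.purpose)
```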

Step 2: Classify Risk Levels 

Apply the EU AI Act’s framework to categorize each system as unacceptable, high, limited, or minimal risk. Risk classification guides your compliance priorities and flags any uses that must be discontinued. 
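A first-pass triage can be scripted by mapping each system’s use case onto the Act’s categories, with anything ambiguous escalated for review. The rules below are deliberately simplified assumptions and do not substitute for a proper assessment against Annex III and the Act’s list of prohibited practices.

```python
# Hypothetical first-pass triage; real classification requires legal review against
# Annex III and the prohibited-practices list.
PROHIBITED_USE_CASES = {"social scoring", "behavioral manipulation"}
HIGH_RISK_USE_CASES = {"hiring", "credit scoring", "education", "biometric identification",
                       "medical device", "law enforcement", "border control"}
LIMITED_RISK_USE_CASES = {"chatbot", "recommendation engine", "deepfake"}

def triage_risk_tier(use_case: str) -> str:
    """Return a provisional risk tier for one use case (to be confirmed by legal review)."""
    use_case = use_case.lower()
    if use_case in PROHIBITED_USE_CASES:
        return "unacceptable (prohibited)"
    if use_case in HIGH_RISK_USE_CASES:
        return "high"
    if use_case in LIMITED_RISK_USE_CASES:
        return "limited"
    return "minimal (confirm with legal review)"

print(triage_risk_tier("hiring"))        # -> high
print(triage_risk_tier("spam filter"))   # -> minimal (confirm with legal review)
```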

Step 3: Identify Compliance Gaps 

Review data practices, model training, and user interactions. Document non-compliant areas for policy or technical updates in alignment with the EU AI Act.  
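Gap analysis can start as a simple comparison of the controls a high-risk system must have against the controls it already has. The control names below are illustrative shorthand for the Article 9 to 15 requirements, not an official checklist.

```python
# Illustrative shorthand for key high-risk controls (Articles 9-15); not official wording.
REQUIRED_HIGH_RISK_CONTROLS = {
    "risk management", "data governance", "technical documentation",
    "event logging", "user transparency", "human oversight", "robustness & security",
}

def find_gaps(implemented_controls: set[str]) -> set[str]:
    """Return the required controls that are not yet implemented for a given system."""
    return REQUIRED_HIGH_RISK_CONTROLS - implemented_controls

# Example: a system that so far only has logging and documentation in place.
gaps = find_gaps({"event logging", "technical documentation"})
print(sorted(gaps))
```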

Step 4: Strengthen Internal Governance 

Assign clear responsibilities across legal, IT, data science, and other teams that work directly with AI functions. This shared responsibility helps foster collaboration across the organization. 
 
Key stakeholders who must be engaged include: 

  • Compliance, Risk, and GRC Teams: Must interpret legal requirements and oversee reporting, documentation, and internal audits. 

  • AI Developers and Product Managers: Responsible for integrating compliance requirements into AI system architecture, functionality, and lifecycle. 

  • Legal and Policy Teams: Need to stay up to date on evolving EU guidelines and ensure the organization’s AI systems meet jurisdictional requirements. 

  • Chief Technology Officers (CTOs): Must oversee system-level compliance, including secure data integration and transparency protocols. 

  • Executive Leadership: Ultimately accountable for funding compliance initiatives and establishing organization-wide responsibility for risk and ethics. 

As the enforcement timeline approaches, regulators are expected to increase scrutiny, making early preparation and cross-functional collaboration essential. 

Step 5: Leverage Technology 

Using the right technology can make AI compliance much easier. Tools that automate documentation, audits, and updates help reduce manual work and avoid errors. GRC tools, like StandardFusion, are especially valuable as they bring everything together in one place and help teams stay on top of AI-related risks and requirements. 

Step 6: Educate and Train Teams 

Educate developers, legal teams, and leadership on AI obligations. Regular, ongoing training builds a compliance-first culture and reduces future risk.

Final Thoughts

The EU AI Act marks a major shift in how technology is governed, placing equal weight on innovation, safety, ethics, and transparency. It offers businesses a structured path to develop trustworthy and responsible AI systems without stifling progress. Forward-thinking organizations can get ahead by auditing their current AI systems, assessing potential risks, and building compliance processes that align with the Act.