EU AI Act: What It Means for Companies Building with AI

· 6 min read · aims.consulting

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive AI regulation - and it’s not coming, it’s here. Prohibited practices have been enforceable since February 2025, GPAI rules kicked in August 2025, and the big one - high-risk AI system requirements - lands in August 2026. If you’re building with AI, deploying AI, or even just using third-party AI tools, this regulation applies to you. The good news? Most of your AI systems probably aren’t in the scary category. Let’s break down what actually matters.

What does the EU AI Act regulate?

The regulation takes a risk-based approach. Instead of treating all AI the same, it classifies AI systems into four risk tiers - each with different obligations. Think of it as a proportional framework: the higher the risk your AI poses to people’s health, safety, or fundamental rights, the more you need to do.

This is actually sensible design. Your internal meeting summarizer doesn’t need the same scrutiny as an AI system deciding who gets a loan. The EU AI Act recognizes that, and so should your compliance strategy.

How does risk classification work?

Here’s the breakdown:

  • Unacceptable risk (banned) - Social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), manipulation techniques targeting vulnerable groups. If you’re doing any of these, you have bigger problems than compliance.

  • High risk - AI systems listed in Annex III of the regulation: biometric identification, critical infrastructure management, education and employment decisions, credit scoring, law enforcement tools, migration and border control systems. These carry the heaviest requirements.

  • Limited risk - Systems that interact directly with people (like chatbots) or generate synthetic content. The main obligation here is transparency: users need to know they’re interacting with AI or looking at AI-generated content.

  • Minimal risk - Everything else. Spam filters, AI-powered search, recommendation engines, most internal tools. No mandatory requirements, though voluntary codes of conduct are encouraged.

The honest assessment? If you’re a typical tech company, most of your AI systems fall under minimal or limited risk. That’s not a reason to ignore the regulation - it’s a reason not to panic.
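The triage logic above can be sketched in code. This is a hypothetical first-pass helper, not a legal classification tool - the domain keywords and the function itself are illustrative assumptions, and real classification requires reviewing each system against the Act’s annexes:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # Annex III use cases
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no mandatory requirements

# Illustrative domain labels only -- real classification requires
# legal review against Annex III, not string matching.
HIGH_RISK_DOMAINS = {
    "biometric identification", "critical infrastructure",
    "education", "employment", "credit scoring",
    "law enforcement", "migration",
}

def rough_tier(domain: str, interacts_with_people: bool) -> RiskTier:
    """First-pass triage of an AI system. Not legal advice."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note how the ordering mirrors the regulation’s logic: the high-risk use case list wins over everything else, and transparency obligations only matter for systems that never hit that list.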

What does high-risk actually require?

If any of your AI systems do land in the high-risk category, the requirements are substantial. You’ll need:

  • Risk management system - A documented, ongoing process for identifying and mitigating risks throughout the AI system’s lifecycle. Not a one-time assessment - a living system.

  • Data governance - Training, validation, and testing datasets must meet quality criteria. You need to address biases and ensure data is representative and relevant.

  • Technical documentation - Detailed records of how the system works, what it was designed to do, and how it was tested. Think of it as a comprehensive system dossier.

  • Human oversight - Mechanisms that allow humans to understand, monitor, and intervene in the AI system’s operation. The level of oversight should match the level of risk.

  • Accuracy, robustness, and cybersecurity - The system must perform consistently and be resilient against errors, faults, and adversarial attacks.

If this sounds familiar, it should. These requirements overlap significantly with ISO 42001 - the international standard for AI management systems. Organizations that pursue ISO 42001 certification are already building the infrastructure that satisfies many of these requirements.

What are the key deadlines?

The EU AI Act entered into force in August 2024, but enforcement is phased:

  • February 2025 - Prohibited AI practices are banned (already in effect)
  • August 2025 - General-purpose AI (GPAI) model rules apply (already in effect)
  • August 2026 - High-risk AI system requirements become enforceable
  • August 2027 - Obligations for high-risk AI systems in Annex I (embedded in regulated products)

The clock is ticking for high-risk systems. August 2026 sounds far away, but building a compliant risk management system, documenting your AI systems properly, and establishing data governance takes time. If you haven’t started, now is the time.

What are the penalties?

The regulation has teeth:

  • Up to 35 million EUR or 7% of global annual turnover (whichever is higher) for violations involving prohibited AI practices
  • Up to 15 million EUR or 3% of turnover (whichever is higher) for other non-compliance
  • Up to 7.5 million EUR or 1% of turnover (whichever is higher) for supplying incorrect or misleading information to authorities

For context, GDPR’s maximum is 20 million EUR or 4% of turnover, whichever is higher. The EU AI Act goes further. And if GDPR enforcement taught us anything, it’s that the EU means business.
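The “whichever is higher” structure means the percentage dominates for any sizable company. A quick sketch, using an assumed 2 billion EUR turnover purely for illustration:

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """The Act's fines are the higher of a fixed cap or a share
    of global annual turnover."""
    return max(cap_eur, turnover_eur * pct)

# Hypothetical company with 2 billion EUR global annual turnover:
turnover = 2_000_000_000
prohibited_practices = max_fine(turnover, 35_000_000, 0.07)  # 140 million EUR
other_noncompliance  = max_fine(turnover, 15_000_000, 0.03)  # 60 million EUR
wrong_information    = max_fine(turnover, 7_500_000, 0.01)   # 20 million EUR
```

At that scale, the fixed caps are irrelevant - the turnover percentage is what sets the exposure.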

What should you do right now?

Here’s a practical starting point - three steps that apply regardless of your company’s size:

  1. Build an AI system inventory - Map every AI system in your organization, including third-party tools and APIs. You’d be surprised how many AI systems fly under the radar. That chatbot your marketing team set up? The AI feature in your CRM? They count.

  2. Classify your risk levels - For each system, determine which risk category it falls into under the EU AI Act. Be honest but don’t over-classify. Most internal tools and customer-facing chatbots are limited or minimal risk.

  3. Identify gaps and prioritize - For any high-risk systems, compare your current practices against the regulation’s requirements. Focus on the gaps that will take the longest to close: risk management systems, documentation, and data governance processes.

This isn’t about checking boxes. An AI system inventory and risk classification done right becomes a strategic asset - it gives you visibility into how AI flows through your organization and where the real risks (and opportunities) live.

How does ISO 42001 fit in?

ISO 42001 and the EU AI Act aren’t the same thing, but they’re deeply complementary. ISO 42001 provides the management system framework - the policies, processes, and controls - that organizations need to govern AI responsibly. The EU AI Act defines the legal requirements.

If you build an AI management system aligned with ISO 42001, you’re already covering significant ground on EU AI Act compliance: risk management, data governance, documentation, human oversight, and continuous improvement. Pursuing certification doesn’t guarantee compliance, but it demonstrates a systematic approach that regulators respect.

We’ve seen this pattern before with ISO 27001 and GDPR. Organizations with mature information security management systems had a much easier time meeting GDPR requirements. The same dynamic applies here.

How we can help

We’ve spent a decade helping organizations navigate EU regulatory frameworks - from GDPR and ISO 27001 to the newer world of AI governance. The EU AI Act is complex, but compliance doesn’t have to be overwhelming.

We’ll help you build an AI system inventory, classify your risk levels honestly, and create a compliance roadmap that aligns with the regulation’s phased timeline. If you’re also considering ISO 42001 certification, we’ll design a unified approach that covers both without duplicating effort.

And if after the assessment your honest answer is “most of our systems are minimal risk” - we’ll tell you that too. No scope inflation, no manufactured urgency.

Let’s talk - we’ll help you figure out exactly where your AI systems stand.