What does the EU AI Act mean for your business?

24 March 2026

AI is everywhere these days, from chatbots on websites to systems that assess credit applications. It is also often embedded in marketing software, CRM systems, conversation analysis platforms and dynamic pricing modules, sometimes without you even noticing. To ensure that AI is used safely, fairly and transparently, the European Union has introduced the EU AI Act. But what does this law actually mean for your SME?


What is the EU AI Act?

The EU AI Act is the world's first comprehensive AI legislation. The law formally entered into force on August 1, 2024, and its obligations are being phased in gradually until August 2026. It applies to all organizations that use AI, regardless of whether they build AI themselves or use AI through third‑party software. The EU uses a risk‑based model with four risk categories, plus a separate set of rules for general‑purpose AI (GPAI).

The categories at a glance:

  1. Minimal risk (no obligations)
    Examples:
    • spam filters
    • video games
    • recommendation systems (in many cases)
  2. Limited risk — transparency obligation
    Here, the user must know that they are interacting with an AI system.
    Examples:
    • chatbots
    • deepfakes
    • AI systems that generate text or images (depending on application)
  3. General Purpose AI (GPAI) — separate rules, not a risk tier
    Generative models such as LLMs (Large Language Models, e.g. GPT models) fall under these additional rules. Obligations for providers of these models include:
    • technical documentation
    • summary of training data
    • cybersecurity
    • model evaluations
  4. High risk — strict requirements from August 2026
    High‑risk AI is AI that can significantly affect people’s safety, livelihoods or fundamental rights.
    Examples according to the law:
    • AI for access to education & professions
    • AI for recruitment, HR & personnel management
    • AI in essential public services
    • AI in law enforcement
    • AI in medical prognosis

    Obligations include:
    • risk management
    • data governance & data quality
    • technical documentation
    • registrations in EU databases
    • logging
    • human oversight
  5. Unacceptable risk — banned from February 2025
    Examples of prohibited systems:
    • social scoring
    • manipulative AI that distorts behavior
    • AI that exploits vulnerable groups
    • emotion recognition in work/education
    • remote biometric identification (with some exceptions)


Does your organization need to take action regarding the AI Act?

In most cases, yes: the law applies to every organization that uses AI, even indirectly through software vendors. How much you need to do depends on the risk category of the AI you use — minimal‑risk applications carry no obligations, while high‑risk applications carry strict ones.


How high are the fines for non‑compliance with the AI Act?

For the most serious violations, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher.
That makes the AI Act's maximum penalties even stricter than the GDPR's.


AI Act Step‑by‑Step Plan for SMEs

Below you will find a step‑by‑step plan:

  1. Map all AI tools
    Also include AI that is “hidden” inside software.
  2. Create an AI policy
    Cover at least: what is allowed and what is not, data usage, privacy, transparency, human oversight and security.
  3. Train employees in safe and responsible AI use
    This supports the transparency requirements and your organizational obligations.
  4. Check whether you use high‑risk AI
    If so, mandatory documentation, risk management, logging and human oversight apply from August 2026.
  5. Work with reliable vendors that comply with EU rules
    Several major GPAI providers, including Microsoft, Google and OpenAI, have signed the EU’s voluntary Code of Practice for general‑purpose AI.


Conclusion

Even if you use AI only to a limited extent, you must comply with the EU AI Act. Some obligations already apply, and the rest will follow by August 2026 at the latest. The best preparation starts now: make an inventory, create a policy, train employees and check whether your tools are compliant.

Note: this article is general guidance, not legal advice — we are not lawyers! For legal questions, consult a lawyer or official government sources.
