What is the EU AI Act?
The EU AI Act is the world's first comprehensive AI law. It formally entered into force on August 1, 2024, and its obligations phase in gradually through August 2026. The law applies to all organizations that use AI, regardless of whether they build AI themselves or use AI through third‑party software. The EU uses a risk‑based model with four categories.
The official risk categories:
- Minimal risk (no obligations)
  Examples:
  - spam filters
  - video games
  - recommendation systems (in many cases)
- Limited risk — transparency obligations
  Here, users must be informed that they are interacting with an AI system.
  Examples:
  - chatbots
  - deepfakes
  - AI systems that generate text or images (depending on the application)
- General‑purpose AI (GPAI) — additional rules
  Generative models such as large language models (LLMs, e.g. GPT models) fall under a separate set of obligations that applies alongside the four risk categories. Obligations for GPAI providers include:
  - technical documentation
  - a summary of the training data
  - cybersecurity measures
  - model evaluations
- High risk — strict requirements from August 2026
  High‑risk AI is AI that directly affects people's lives.
  Examples according to the law:
  - AI for access to education and professions
  - AI for recruitment, HR and personnel management
  - AI in essential public services
  - AI in law enforcement
  - AI in medical prognosis
  Obligations include:
  - risk management
  - data governance and data quality
  - technical documentation
  - registration in the EU database
  - logging
  - human oversight
- Unacceptable risk — banned from February 2025
  Examples of prohibited systems:
  - social scoring
  - manipulative AI that distorts behavior
  - AI that exploits vulnerable groups
  - emotion recognition in the workplace and in education
  - real‑time remote biometric identification in public spaces (with narrow exceptions)
Does your organization need to take action regarding the AI Act?
Yes. The law applies to every organization that uses AI, even if it only does so indirectly through software vendors.
How high are the fines for non‑compliance with the AI Act?
Fines can reach up to €35 million or 7% of global annual revenue, whichever is higher.
This makes the AI Act's maximum penalties even stricter than the GDPR's.
AI Act Step‑by‑Step Plan for SMEs
Below you will find a step‑by‑step plan:
- Map all AI tools
Also include AI that is “hidden” inside software.
- Create an AI policy
Think of: what is and is not allowed, data usage, privacy, transparency, human oversight and security.
- Train employees in safe & responsible AI use
This aligns with transparency requirements and organizational obligations.
- Do you use high‑risk AI?
Then mandatory documentation, risk management, logging and human oversight apply from August 2026.
- Work with reliable vendors that comply with EU rules
GPAI providers such as Microsoft, Google and OpenAI have already signed voluntary EU agreements.
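As a purely illustrative aid for step 1 (not something the Act prescribes), an AI‑tool inventory can start as a simple structured list with a provisional risk label per tool. The tool names and classifications below are hypothetical examples, not legal assessments:

```python
# Illustrative sketch: a minimal AI-tool inventory for the mapping step.
# Tool names and risk labels are hypothetical, not legal classifications.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str           # tool or product name
    vendor: str         # who provides it
    use_case: str       # what it is used for in your organization
    risk_category: str  # provisional assessment: minimal/limited/high/unacceptable

inventory = [
    AITool("Spam filter", "Email vendor", "filter inbound mail", "minimal"),
    AITool("Support chatbot", "SaaS vendor", "customer service", "limited"),
    AITool("CV screening", "HR platform", "recruitment", "high"),
]

# Flag tools that fall under the strictest obligations and need review.
needs_review = [t.name for t in inventory if t.risk_category in ("high", "unacceptable")]
print(needs_review)  # ['CV screening']
```

Even a spreadsheet with the same four columns serves the purpose; the point is that every tool, including AI "hidden" inside vendor software, gets a row and a provisional category.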
Conclusion
Even if you only use a little AI, you must comply with the EU AI Act. Some obligations already apply and the rest will follow by 2026 at the latest. The best preparation starts now: inventory, create policy, train employees and check whether your tools are compliant.
Note: this article is general guidance, not legal advice — we are not lawyers! For binding legal requirements, consult a lawyer or the official EU texts.