EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system for AI systems, with obligations that scale from transparency duties to outright bans depending on the risk tier. For hardware manufacturers building AI-enabled products, the Act has direct implications for Edge AI design, documentation, and conformity assessment.
Key Facts
| Detail | Information |
|---|---|
| Full name | Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence |
| Entered into force | 1 August 2024 |
| Prohibited practices apply from | 2 February 2025 |
| GPAI obligations apply from | 2 August 2025 |
| High-risk AI obligations apply in full | 2 August 2026 |
| Scope | AI systems placed on the EU market, put into service in the EU, or whose output is used in the EU, regardless of where the provider is established |
| Maximum penalty | Up to €35 million or 7% of global annual turnover, whichever is higher (for prohibited practices) |
Risk Categories
The AI Act classifies AI systems into four tiers:
| Risk Level | Examples | Obligations |
|---|---|---|
| Unacceptable | Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulative AI | Banned; cannot be placed on the EU market |
| High-risk | Medical devices with AI, safety components in vehicles, critical infrastructure monitoring, biometric identification, employment screening | Full conformity assessment, risk management, data governance, human oversight, technical documentation |
| Limited risk | Chatbots, emotion recognition (outside the prohibited workplace and education uses), deepfakes | Transparency obligations: users must be informed they are interacting with AI or viewing AI-generated content |
| Minimal risk | AI-enabled spam filters, AI in video games, inventory management | No specific obligations (voluntary codes of conduct encouraged) |
Impact on Hardware Products
Edge AI Devices
For manufacturers building products with on-device AI inference (Edge AI), the AI Act affects:
| Requirement | What It Means for Hardware |
|---|---|
| Risk classification | The product manufacturer must classify the AI system’s risk level |
| Technical documentation | Full documentation of the AI model, training data, and performance metrics |
| Conformity assessment | High-risk AI embedded in hardware may require third-party assessment |
| Human oversight | Hardware must provide interfaces for human review/override of AI decisions |
| Record-keeping (logging) | The system must log AI decisions for audit and bias monitoring (see the sketch after this table) |
| Data governance | Training data must be documented, with bias mitigation measures |
| Post-market monitoring | Ongoing performance monitoring of deployed AI systems |
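To make the human-oversight and record-keeping rows concrete, here is a minimal sketch of what an on-device decision log could look like. The class, function, and field names (`InferenceRecord`, `log_decision`, and so on) are illustrative assumptions, not fields mandated by the AI Act.

```python
# Hypothetical sketch of an on-device AI decision log supporting the
# "human oversight" and "record-keeping" rows above. Field names and
# structure are illustrative assumptions, not mandated by the AI Act.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InferenceRecord:
    """One logged AI decision, kept on-device for audit and bias monitoring."""
    timestamp: float              # when the inference ran
    model_version: str            # ties the decision to documented model/weights
    input_digest: str             # hash of the input, so raw data need not be stored
    prediction: str               # the model's output
    confidence: float             # score used for accuracy/bias statistics
    human_override: bool = False  # set when an operator reverses the decision
    override_reason: str = ""     # free-text justification for the override

def log_decision(record: InferenceRecord, path: str = "audit_log.jsonl") -> None:
    """Append one record as a JSON line; an append-only file keeps the trail tamper-evident."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: log a prediction, then record a human override of that decision.
rec = InferenceRecord(
    timestamp=time.time(),
    model_version="defect-detector-1.4.2",
    input_digest="sha256:9f2c...",
    prediction="defect",
    confidence=0.87,
)
log_decision(rec)

rec.human_override = True
rec.override_reason = "Operator inspection: surface mark, not a defect"
log_decision(rec)
```

Keeping only an input hash rather than the raw input is one way to reconcile the audit-trail requirement with data minimization; whether that suffices depends on the product's risk classification.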
Edge AI as a Compliance Strategy
Processing data on-device with Edge AI can reduce regulatory burden compared to cloud AI:
| Aspect | Cloud AI | Edge AI |
|---|---|---|
| Data transfers | Data sent to third-party servers — GDPR concerns | Data stays on-device — simplified compliance |
| Third-party processing | Cloud provider is a data processor — DPA required | No third-party processing — direct control |
| Audit trail | Must audit cloud provider’s AI systems | Self-contained, auditable on-device system |
| Latency for safety | Network-dependent — unsuitable for safety-critical | Deterministic local inference — suitable for safety |
| AI Act documentation | Must document cloud-based AI processing chain | Simpler — single device, single deployment |
Strategic advantage: running inference on a dedicated NPU keeps raw data on the device, which removes cross-border data-transfer concerns, simplifies GDPR compliance, and shrinks the AI processing chain that must be documented for AI Act conformity. Note that the risk classification itself does not change with deployment location; what Edge AI reduces is the documentation and data-protection surface. This is why our Edge AI designs emphasize on-device inference.
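As a sketch of the on-device pattern the table describes: raw sensor data is consumed locally and discarded, and only aggregate, non-personal results leave the device. The function names and the occupancy-counting scenario are assumptions chosen for illustration.

```python
# Hypothetical sketch of the edge pattern above: raw frames are processed
# on-device and discarded; only aggregate, non-personal counts are reported.
# detect_people() stands in for whatever accelerated NPU inference the
# product actually runs; its name and signature are illustrative.
from typing import List

def detect_people(frame: bytes) -> int:
    """Placeholder for on-device NPU inference returning a person count."""
    return 0  # a real implementation would run the deployed model here

def process_locally(frames: List[bytes]) -> dict:
    counts = []
    for frame in frames:
        counts.append(detect_people(frame))
        # frame goes out of scope here; raw imagery is never transmitted
    # Only this aggregate leaves the device, avoiding third-party data transfers.
    return {"frames_processed": len(counts), "max_occupancy": max(counts, default=0)}

print(process_locally([b"frame-1", b"frame-2"]))
```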
AI Act vs. Other EU Regulations
| Regulation | Focuses On | Link to AI Act |
|---|---|---|
| AI Act | AI systems (software + hardware) | The primary AI regulation |
| CRA | Product cybersecurity | AI-enabled products must comply with both AI Act and CRA |
| GDPR | Personal data protection | AI processing personal data must comply with GDPR + AI Act |
| NIS2 | Organizational security | Entities in NIS2-covered sectors that deploy high-risk AI must meet both NIS2 security measures and AI Act obligations |
| Medical Device Regulation | Medical devices | AI systems in medical devices that require notified-body assessment under the MDR are high-risk under the AI Act |
| Machinery Regulation | Industrial machinery | AI safety components in machinery are high-risk |
General-Purpose AI (GPAI) Models
The AI Act also regulates foundation models and GPAI (e.g., large language models):
- All GPAI: Must provide technical documentation, comply with EU copyright law, publish training data summaries.
- Systemic risk GPAI (cumulative training compute above 10²⁵ FLOPs): Additional requirements for model evaluation, adversarial testing, cybersecurity, and energy consumption reporting (see the compute estimate below).
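For a sense of scale, training compute is often estimated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens. The model sizes and token counts below are illustrative assumptions, not figures from the Act.

```python
# Back-of-envelope check against the 10^25 FLOP systemic-risk threshold,
# using the common estimate FLOPs ~= 6 * parameters * training tokens.
# The example model sizes and token counts are illustrative assumptions.
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption, cumulative training FLOPs

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, params, tokens in [
    ("7B model, 2T tokens", 7e9, 2e12),       # ~8.4e22 FLOPs: far below threshold
    ("70B model, 15T tokens", 70e9, 15e12),   # ~6.3e24 FLOPs: below, but approaching
    ("400B model, 15T tokens", 400e9, 15e12), # ~3.6e25 FLOPs: presumed systemic risk
]:
    flops = training_flops(params, tokens)
    status = "systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs ({status})")
```

The takeaway: only the largest frontier-scale training runs cross the threshold, so typical Edge AI models deployed on hardware products fall far below it.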
Related Terms
- Edge AI — On-device AI inference that simplifies AI Act compliance through local data processing.
- CRA — The product cybersecurity regulation that applies alongside AI Act for connected AI devices.
- NIS2 — The organizational cybersecurity directive relevant for entities deploying high-risk AI systems.