
EU AI Act

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system for AI systems, imposing requirements ranging from transparency obligations to outright bans depending on the risk level. For hardware manufacturers building AI-enabled products, the Act has direct implications for Edge AI design, documentation, and conformity assessment.

Key Facts

| Detail | Information |
| --- | --- |
| Full name | Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence |
| Entered into force | 1 August 2024 |
| Ban on prohibited AI practices applies from | 2 February 2025 |
| GPAI obligations apply from | 2 August 2025 |
| High-risk AI obligations fully apply from | 2 August 2026 |
| Scope | All AI systems placed on the EU market or used in the EU, regardless of where developed |
| Penalty | Up to €35 million or 7% of global annual turnover (prohibited practices) |

Risk Categories

The AI Act classifies AI systems into four tiers:

| Risk Level | Examples | Obligations |
| --- | --- | --- |
| Unacceptable | Social scoring, real-time biometric identification in public (with exceptions), manipulative AI | Banned — cannot be placed on the EU market |
| High-risk | Medical devices with AI, safety components in vehicles, critical infrastructure monitoring, biometric identification, employment screening | Full conformity assessment, risk management, data governance, human oversight, technical documentation |
| Limited risk | Chatbots, emotion recognition, deepfakes | Transparency obligations — users must be informed they’re interacting with AI |
| Minimal risk | AI-enabled spam filters, AI in video games, inventory management | No specific obligations (voluntary codes of conduct encouraged) |

Impact on Hardware Products

Edge AI Devices

For manufacturers building products with on-device AI inference (Edge AI), the AI Act affects the following areas; a minimal logging sketch follows the table:

| Requirement | What It Means for Hardware |
| --- | --- |
| Risk classification | The product manufacturer must classify the AI system’s risk level |
| Technical documentation | Full documentation of the AI model, training data, and performance metrics |
| Conformity assessment | High-risk AI embedded in hardware may require third-party assessment |
| Human oversight | Hardware must provide interfaces for human review/override of AI decisions |
| Accuracy logging | The system must log AI decisions for audit and bias monitoring |
| Data governance | Training data must be documented, with bias mitigation measures |
| Post-market monitoring | Ongoing performance monitoring of deployed AI systems |
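
To make the logging and oversight rows concrete, here is a minimal sketch of an on-device decision log. All names, fields, and the hash-chain scheme are illustrative assumptions on our part; the AI Act mandates logging and human-oversight outcomes, not any particular record format.

```python
# Illustrative only: field names and the hash-chain scheme are assumptions,
# not a format prescribed by the AI Act.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float      # when the inference ran
    model_version: str    # ties the decision to documented model weights
    input_digest: str     # hash of the raw input (avoids storing personal data)
    output_label: str     # the AI decision
    confidence: float     # supports accuracy and bias monitoring
    human_override: bool  # set when an operator reviews or overrides the decision
    prev_hash: str        # chains records so after-the-fact edits are detectable

def append_record(log: list, **fields) -> DecisionRecord:
    # Chain each record to a digest of the previous one so silent edits
    # break verification during an audit.
    prev = (hashlib.sha256(json.dumps(asdict(log[-1]), sort_keys=True).encode()).hexdigest()
            if log else "genesis")
    rec = DecisionRecord(timestamp=time.time(), prev_hash=prev, **fields)
    log.append(rec)
    return rec

log = []
append_record(log,
              model_version="v1.2.0",
              input_digest=hashlib.sha256(b"frame-0001").hexdigest(),
              output_label="anomaly",
              confidence=0.93,
              human_override=False)
```

Hashing the input rather than storing it keeps personal data out of the audit trail while still letting an auditor match a logged decision to a specific input.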

Edge AI as a Compliance Strategy

Processing data on-device with Edge AI can reduce regulatory burden compared to cloud AI:

| Aspect | Cloud AI | Edge AI |
| --- | --- | --- |
| Data transfers | Data sent to third-party servers — GDPR concerns | Data stays on-device — simplified compliance |
| Third-party processing | Cloud provider is a data processor — DPA required | No third-party processing — direct control |
| Audit trail | Must audit the cloud provider’s AI systems | Self-contained, auditable on-device system |
| Latency for safety | Network-dependent — unsuitable for safety-critical use | Deterministic local inference — suitable for safety |
| AI Act documentation | Must document the cloud-based AI processing chain | Simpler — single device, single deployment |

Strategic advantage: Edge AI on a dedicated NPU avoids off-device data transfers, simplifies GDPR compliance, and narrows the scope of AI Act conformity assessment. This is why our edge AI designs emphasize on-device inference.
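
As a sketch of that on-device pattern (class and method names are hypothetical), raw inputs never leave the device; only aggregate telemetry is exported for post-market monitoring:

```python
# Hypothetical design sketch, not a normative pattern: raw sensor data stays
# on-device; only aggregate telemetry leaves, which is what keeps third-party
# data transfers (and the GDPR questions they raise) out of scope.
from collections import deque

class EdgeInferenceNode:
    def __init__(self, model):
        self.model = model                # e.g. a compiled NPU model handle
        self.window = deque(maxlen=1000)  # rolling confidence window

    def infer(self, frame: bytes) -> str:
        label, confidence = self.model(frame)  # local inference only
        self.window.append(confidence)
        return label                           # the raw frame is never transmitted

    def telemetry(self) -> dict:
        # Post-market monitoring export: aggregates only, no raw inputs.
        n = len(self.window)
        return {"samples": n,
                "mean_confidence": sum(self.window) / n if n else None}

# Usage with a stand-in model:
node = EdgeInferenceNode(model=lambda frame: ("anomaly", 0.93))
node.infer(b"frame-0001")
print(node.telemetry())  # {'samples': 1, 'mean_confidence': 0.93}
```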

AI Act vs. Other EU Regulations

| Regulation | Focuses On | Link to AI Act |
| --- | --- | --- |
| AI Act | AI systems (software + hardware) | The primary AI regulation |
| CRA | Product cybersecurity | AI-enabled products must comply with both the AI Act and the CRA |
| GDPR | Personal data protection | AI processing personal data must comply with both the GDPR and the AI Act |
| NIS2 | Organizational security | Organizations in NIS2-covered sectors that deploy high-risk AI must meet both sets of obligations |
| Medical Device Regulation | Medical devices | AI-based medical devices subject to third-party conformity assessment are high-risk under the AI Act |
| Machinery Regulation | Industrial machinery | AI safety components in machinery are high-risk |

General-Purpose AI (GPAI) Models

The AI Act also regulates general-purpose AI (GPAI) and foundation models, such as large language models:

  • All GPAI: Must provide technical documentation, comply with EU copyright law, publish training data summaries.
  • Systemic risk GPAI (>10²⁵ FLOPs training compute): Additional requirements for model evaluation, adversarial testing, cybersecurity, and energy consumption reporting.
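
To put the 10²⁵ FLOP threshold in perspective, a common rule of thumb estimates dense-transformer training compute as roughly 6 × parameters × training tokens. The heuristic and the example model size are our assumptions, not part of the Act:

```python
# Back-of-the-envelope check against the 1e25 FLOP systemic-risk threshold.
# The 6 * params * tokens approximation is a widely used heuristic for dense
# transformer training compute, not a formula from the AI Act.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

THRESHOLD = 1e25

# e.g. a hypothetical 70-billion-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs -> systemic-risk GPAI: {flops > THRESHOLD}")
# ~6.3e24 FLOPs, below the threshold; roughly 1.6x more compute would cross it.
```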

Related

  • Edge AI — On-device AI inference that simplifies AI Act compliance through local data processing.
  • CRA — The product cybersecurity regulation that applies alongside the AI Act for connected AI devices.
  • NIS2 — The organizational cybersecurity directive relevant for entities deploying high-risk AI systems.