March 15, 2025

EU AI Act: Guiding You Through the First Regulation


The European Union ambitiously regulates artificial intelligence as part of its broader digital strategy. Focusing on fostering innovation while safeguarding societal interests, the EU aims to create a framework ensuring the responsible development and use of AI technologies. 

Guided by the EU AI Act, these technologies could revolutionise industries, offering advancements such as improved healthcare, safer transport systems, more efficient manufacturing, and sustainable energy solutions.

The EU's Regulatory Framework for AI

In April 2021, the European Commission introduced the world’s first comprehensive AI regulatory framework. Central to this initiative is a risk-based approach to AI systems, classifying them into varying levels of risk to determine the corresponding regulatory measures.

Different Rules for Different Risks

The AI Act sets out obligations for developers and users based on the assessed risk level of their AI systems, ensuring a fair and balanced approach to regulation.

  • Minimal risk: Most AI applications fall under this category and require no stringent oversight.

  • Unacceptable risk: AI systems deemed a threat to fundamental rights and safety are banned outright. These include:

– Cognitive and behavioural manipulation of vulnerable groups (e.g., voice-activated toys encouraging harmful actions).

– Social scoring systems that evaluate individuals based on socioeconomic status or behaviour.

– Real-time biometric identification systems, such as facial recognition, with limited exceptions for law enforcement in severe cases.

High-Risk AI Systems 

AI systems with significant safety or fundamental rights implications are classified as high risk. These include:

  1. AI in Regulated Products:
  • Toys
  • Aviation
  • Automotive vehicles
  • Medical devices
  • Lifts
  2. AI in Critical Sectors:
  • Management of critical infrastructure
  • Education and employment processes
  • Access to essential services and benefits
  • Law enforcement and border management
  • Legal interpretation and application

All high-risk AI systems will undergo a rigorous assessment before entering the market and will be continuously monitored throughout their lifecycle, providing citizens with a sense of security.

Transparency and Generative AI

Generative AI models, such as ChatGPT, fall outside the high-risk category but must comply with transparency requirements and EU copyright law:

  • Disclosing AI-generated content
  • Preventing the creation of illegal content
  • Publishing summaries of copyrighted data used in training

Advanced models, like GPT-4, classified as high-impact general-purpose AI, require thorough evaluation and prompt reporting of serious incidents to the European Commission.

AI-generated or modified content, such as deepfakes, must be clearly labelled to inform users.

Supporting Innovation

The AI Act requires national authorities to provide regulatory sandboxes to encourage innovation, especially among startups and small- to medium-sized enterprises. These testing environments simulate real-world conditions, giving developers a safe and controlled space to build, refine, and experiment with AI models.

Timeline for Implementation

The EU’s AI Act was adopted by the European Parliament in March 2024, with final approval by the Council in May 2024. The regulation’s rollout includes:

  • Bans on AI systems posing unacceptable risks take effect within six months of the Act's entry into force, ensuring swift action against systems that threaten fundamental rights and safety.

  • Transparency rules for general-purpose AI apply within 12 months.

  • High-risk system compliance is required within 36 months.

The EU AI Act sets a precedent for global AI governance by balancing innovation with accountability. It offers a roadmap for harnessing technology while protecting societal values.

Keep up with the Daily Euro Times for more! 


Author

  • Blerta Kosumi

    Writer for the Daily Euro Times. Blerta brings a blend of digital marketing, SEO expertise, and content strategy to deliver impactful results. With a strong analytical approach, Blerta crafts data-driven strategies to engage audiences, boost brand visibility, and create meaningful connections.
