Smart About Risk  
Rules for Artificial Intelligence, or the AI Act

This August marks one year since the entry into force of the AI Act (Regulation (EU) 2024/1689), the first regulation laying down harmonised rules on artificial intelligence. It sets out obligations for both providers (developers) of AI systems and the entities that deploy them.

AI systems are classified into four risk levels:

  • Unacceptable risk (AI systems considered a clear threat to the safety, livelihoods and rights of people),
  • High risk (AI use cases that can pose serious risks to health, safety or fundamental rights),
  • Limited risk (AI systems subject mainly to transparency obligations, such as chatbots that must disclose that users are interacting with AI), and
  • Minimal risk (the vast majority of AI systems, such as spam filters, which face no additional obligations).

The obligations for AI systems vary depending on the risk category they fall into.

AI systems posing unacceptable risk have been banned since February 2025. Prohibited practices include social scoring, manipulation of user behaviour, and real-time remote biometric identification (e.g. facial recognition) in publicly accessible spaces, with narrow exceptions for law enforcement under strictly defined conditions. A specific example of a banned AI system is one that uses facial recognition to deny aggressive fans access to stadiums. Also since February 2025, organisations must ensure a sufficient level of AI literacy among employees who work with AI.

Governance rules and obligations for providers of general-purpose AI models take effect on 2 August 2025; codes of practice are being drawn up to help these providers demonstrate compliance. From the same date, penalties for infringements of the AI Act also apply.

The AI Act places the greatest focus on high-risk AI systems, which are subject to numerous obligations before they may be placed on the market or put into service, including risk management, data quality and governance, technical documentation, record-keeping (logs), transparency, human oversight, and accuracy, robustness, and cybersecurity. These requirements apply from 2 August 2026, when the AI Act becomes fully applicable. Systems that assess individuals’ creditworthiness are one example of high-risk AI.

An extended transition period, until 2 August 2027, applies to high-risk AI systems embedded in regulated products, such as an AI system built into a diagnostic medical device.

In connection with the AI Act, ARM can offer you:

  • Enhancing strategies, processes, and methodologies for managing AI-related risks,
  • Establishing rules for the development of AI models,
  • Defining internal policies for AI usage,
  • Gap analyses and AI Act compliance audits,
  • Training sessions and workshops on the AI Act,
  • Collaboration on the development/testing of specific AI models.