The EU AI Act has been unanimously approved by member states and will, in general, be fully applicable within two years. This groundbreaking legislation is the first globally to establish rules for artificial intelligence and its use – much like GDPR set the bar for data privacy regulation.

Any company using AI within the European Union will be impacted by the EU AI Act. And like GDPR, the penalties for noncompliance are stiff. The worst offenders will face fines of up to €35 million or 7% of annual global turnover, whichever is higher. The overall purpose of the EU AI Act is to impose ethical standards and human oversight around the use of AI to protect citizens from potential dangers. To be sure, AI can solve many challenges with little risk. Certain AI applications, however, have the potential to cause significant damage to human rights and greater society – and this is where the new rules focus.

Many organizations have already taken steps on their own to ensure responsible usage and may need only relatively minor changes to demonstrate compliance. If you have not considered AI risks or developed a plan for how AI is used, however, you will need to evaluate your position and determine what is needed to comply.

AI Risk and What It Means

The EU AI Act outright prohibits some uses of AI. And when it is allowed, usage must be responsible, upfront, and transparent. The rules categorize AI risks according to the potential for harm. There are four main categories:

Prohibited risks. This is the most severe category. AI systems that are a clear threat to safety, fundamental human rights, or a person’s livelihood are unacceptable. You cannot use biometric data, for instance, to categorize people by race, sexual orientation, beliefs, or union membership. You cannot create a database by scraping facial images off the internet or CCTV. Social scoring is also prohibited.

  • What to do: Determine if you are using AI in a prohibited way.
  • When: You have six months after entry into force to comply.

High risk. This category is highly regulated and includes AI systems used in critical infrastructures that could endanger the health of citizens (like public transportation), safety components of products (like AI-assisted surgery), educational training that could determine the course of someone’s life (like test scoring), employment (like resume screening), and essential services (like loan scoring).

  • What to do: These systems must meet strict requirements before they can go on the market. They must have defined processes to assess and mitigate risks, govern data and training, and document compliance. The systems also must have appropriate human oversight and robust cybersecurity.
  • When: You have 24 months after entry into force to comply.

A Practical Look at High-Risk AI Systems

If you develop what the regulation deems a high-risk AI system, you must go through specific steps to gain approval before you can put the technology on the market.

  1. Assess the system to ensure compliance with AI requirements and notify the appropriate governing body.
  2. Register the AI system in an EU database.
  3. Sign a declaration of conformity and identify the system as approved.

Once the system is on the market, you must report serious incidents and malfunctions, and ensure human oversight. Authorities will maintain market oversight.

Limited risk. This category is lightly regulated and primarily refers to risks associated with a lack of transparency with AI use. People must be informed, for example, when AI is used to power chatbots so they can decide whether to continue or end the interaction. Organizations also must identify AI-generated content as such. This labeling applies to text, as well as audio and video content.

  • What to do: Determine how you are using AI and what content/interactions need to be labeled.
  • When: You have 36 months after entry into force to comply.

Minimal or no risk. This category includes general-purpose AI that has no predetermined purpose, as well as everyday applications that pose little or no risk. The majority of today’s AI systems fall into this category.

  • What to do: While usage is not restricted, the EU AI Act does require self-assessment and mitigation of systemic risks, technical documentation, instructions for use, copyright compliance, and human oversight of systems like chatbots. Providers of the most capable general-purpose models (those posing systemic risk) must also perform adversarial testing, report serious incidents, and ensure proper cybersecurity measures are in place. Minimal risk systems like games and spam filters can be freely used.
  • When: You have 12 months after entry into force to comply.

How to Get Started

With deadlines looming, take time now to understand your legal obligations and course of action.

1. Conduct a gap analysis to determine what changes are necessary for your data-governance structures, policies, risk assessment processes, etc. to achieve compliance and provide appropriate documentation to regulators.

2. Operationalize your processes to embed the required steps and ensure alignment across the organization. The requirements should be interwoven into existing corporate compliance workflows, with periodic checks to reassess risks in case a system’s category changes.

3. Assign responsibilities for risk assessment, tracking, metrics, oversight, and compliance to the board, C-suite, managers, etc. according to your culture and existing practices. Think about who gets notified when there’s an issue.

4. Invest in AI literacy by training employees on AI ethics and the specifics of the EU AI Act. Data scientists and engineers will likely require special training and development for their roles.

The Road Ahead

Artificial intelligence – and generative AI in particular – is evolving at breakneck speed, and the EU AI Act is constructed to adapt to future technological change. One constant, however, is that AI applications should remain trustworthy, and providers must continue to assess and mitigate the risks throughout the application’s lifecycle.

Boards and the C-suite are ultimately responsible for protecting the organization from risk, both regulatory and reputational. And in a perfect world, they would view the EU AI Act as a starting point to strengthen the brand’s trustworthiness. Practically speaking, however, achieving that ideal might be difficult. Competing priorities and limited resources may constrain actions to simply doing what is necessary to check the compliance box.

Whichever path you take, leaders must be involved in assessing the risks and determining the proper course of action, given the severity of fines for noncompliance. Technology tools can help by providing a consistent way to assess risks, monitor actions, track metrics, and collaborate across functions. But this act is not a set-it-and-forget-it regulation. Even something as simple as supplying incomplete or misleading information to authorities can trigger a fine of up to €7.5 million or 1% of global annual turnover.

AI offers exciting possibilities, and companies can and should continue to innovate. The EU AI Act ensures that members of society – individually and collectively – can enjoy the benefits of current and future AI capabilities without fear of the dark side.

For more on efficient corporate compliance, download our ebook, Transforming Compliance from Check-the-Box to Champion, and check out Riskonnect’s Compliance software solution.