Gartner: Navigating the EU AI Act and the future of AI regulation

Gabriele Rigon, Senior Principal Analyst in the Enterprise Operations team at Gartner, writes exclusively for NODE Magazine.

The European Union (EU) has implemented one of the world’s most comprehensive regulatory frameworks for AI. The EU AI Act will reshape the development, deployment, and governance of AI across industries, with far-reaching implications beyond European borders.

Much like the General Data Protection Regulation (GDPR) before it, this regulation will affect global organisations that engage with EU markets, requiring them to meet stringent compliance standards or face severe penalties. The implications are profound not only for providers and deployers of AI solutions but also for importers and distributors, and for the technology industry as a whole. This legislative shift brings both challenges and opportunities, but, in essence, it offers a framework of constraints that can help foster sustainable and safe AI innovation.

For businesses and IT leaders involved in AI, understanding the key aspects of this regulation is crucial to succeed in an increasingly regulated environment.

1. A risk-based approach to AI governance

The EU AI Act takes a risk-based approach to regulating AI systems, classifying them into three main categories based on the intensity and scope of the risk they entail: unacceptable AI practices, high-risk AI systems, and minimal-risk AI systems. This framework ensures that the most harmful applications of AI are prohibited outright, while higher-risk systems face strict controls.

Unacceptable-risk AI systems are those deemed to pose a clear threat to the fundamental rights and freedoms of people, whether groups or specific individuals. Examples include AI systems for social scoring or mass surveillance, which the EU seeks to ban due to the potential harm they can inflict on privacy and civil liberties. Biometric systems, such as those used for remote identification in public spaces, also face significant restrictions, with few exceptions reserved for law enforcement.

High-risk AI systems, on the other hand, have the potential to impact safety or human rights but can still be used under stringent conditions. Such systems will need to meet compliance standards covering transparency, documentation, user information, and human oversight. This category includes AI applications used in critical sectors like healthcare, transportation, and law enforcement, where the consequences of failure or misuse can be severe. For example, AI tools used in medical diagnostics or autonomous vehicles will face detailed reporting and operational requirements to ensure their reliability and safety.

Finally, minimal-risk AI systems, which comprise the majority of AI applications, will not be subject to strict regulatory oversight. However, the EU encourages companies developing these systems to follow voluntary codes of conduct to ensure responsible AI usage, particularly regarding transparency. Spam filters and grammar or spelling checkers are examples of minimal-risk AI systems.

In addition to these risk categories, the Act imposes specific transparency obligations on general-purpose AI (GPAI) models, the models that underpin generative AI. These obligations relate to technical documentation and a degree of visibility into the data sources used to train the models. The largest GPAI models are also identified as posing systemic risks, and additional mandatory requirements will apply to them, for example in relation to model evaluation and systemic risk assessment.
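To make the tiering concrete, the sketch below models the Act's categories as a simple enumeration and shows how an organisation might tag systems in an internal inventory. The tier names follow the categories described above; the example systems and their assignments are purely illustrative, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the EU AI Act, as outlined above."""
    UNACCEPTABLE = "prohibited practice"  # e.g. social scoring, mass surveillance
    HIGH = "high risk"                    # e.g. medical diagnostics, autonomous vehicles
    MINIMAL = "minimal risk"              # e.g. spam filters, grammar checkers

# Illustrative inventory: system name -> assumed tier.
# Real classification requires legal analysis, not a lookup table.
inventory = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "diagnostic-imaging-assistant": RiskTier.HIGH,
    "email-spam-filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.value}")
```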

2. Global reach and penalties

One of the defining features of the EU AI Act is its extraterritorial reach. Like the GDPR, the AI Act applies not only to businesses operating within the EU but also to any organisation, anywhere in the world, that offers AI products or services to EU consumers or whose AI systems impact EU citizens. This means that AI developers and providers outside the EU will still be subject to its requirements if they want to access the European market.

The penalties for non-compliance are significant. Organisations that fail to meet the standards set out by the Act could face fines of up to 7% of their global annual turnover. For large multinational corporations, this could amount to billions of euros. For small and medium-sized enterprises (SMEs), the penalties will be proportionate to their size but still pose a substantial financial risk. Importantly, companies could be penalised for both violations of the AI Act and other relevant regulations, such as the GDPR, compounding the potential costs of non-compliance.
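As a rough illustration of that exposure, the snippet below applies the headline ceiling of 7% of global annual turnover mentioned above. The turnover figure is invented for the example, and the Act's actual fine schedule varies by the type of violation.

```python
def max_fine_eur(global_annual_turnover_eur: float, rate: float = 0.07) -> float:
    """Headline penalty ceiling: up to 7% of global annual turnover."""
    return global_annual_turnover_eur * rate

# Hypothetical multinational with EUR 50 billion in annual turnover.
turnover = 50_000_000_000
print(f"Maximum exposure: EUR {max_fine_eur(turnover):,.0f}")  # EUR 3,500,000,000
```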

3. Preparing for the future of AI regulation

While the AI Act may initially be perceived as a barrier, especially for European AI companies, it also presents opportunities. Complying with these regulations early could offer a competitive edge, as organisations that demonstrate responsible and ethical AI use are likely to build greater trust with consumers and partners. Additionally, as the EU sets a high standard for AI governance, other regions may adopt similar frameworks, making early compliance a potential advantage in global markets.

The short-term impact of the legislation may be to slow time-to-market for new AI innovations as companies work to ensure compliance with the new rules. However, over time, these regulations could help create a more level playing field, where responsible AI development is rewarded and trust in AI systems is strengthened.

Companies can begin preparing for the Act’s implementation by conducting thorough audits of their existing AI systems, especially those that could fall into the high-risk category. Ensuring that AI models are transparent, fair, and secure will be key to meeting the compliance standards set out by the regulation. Additionally, businesses should review their data privacy and security measures, as compliance with the GDPR will be a prerequisite for adhering to the AI Act’s provisions.
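One practical starting point for such an audit is a structured inventory that records each AI system alongside its assumed risk tier and the compliance checks still outstanding. The sketch below is a minimal, hypothetical example of such a record; the field names and checklist items are assumptions for illustration, not requirements taken from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical audit record for one AI system in an internal inventory."""
    name: str
    purpose: str
    risk_tier: str                       # e.g. "high", "minimal" (per the Act's tiers)
    open_checks: list[str] = field(default_factory=list)

record = AISystemRecord(
    name="credit-scoring-model",
    purpose="consumer credit decisions",
    risk_tier="high",
    open_checks=[
        "technical documentation",
        "human-oversight procedure",
        "GDPR data-processing review",   # GDPR compliance as a prerequisite
    ],
)
print(f"{record.name}: {len(record.open_checks)} compliance checks outstanding")
```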

Another critical area of focus should be developing robust AI governance practices. Organisations that integrate AI Trust, Risk, and Security Management (AI TRiSM) into their operations will be better equipped to manage the risks associated with AI and ensure that their systems meet the upcoming legal requirements. This involves not only technical safeguards but also fostering a culture of ethical AI use, where fairness, transparency, and human oversight are prioritised.

The road to compliance may be complex, but the potential rewards – in terms of trust, market access, and long-term competitiveness – are substantial.


Gabriele Rigon is Senior Principal Analyst in the Enterprise Operations team at Gartner and will be discussing this topic further at the Gartner IT Symposium/Xpo taking place in Barcelona, Spain, from 4-7 November 2024.
