The European Union has introduced new regulations regarding artificial intelligence, known as the EU AI Act, to foster trust among citizens and ensure that AI technologies remain “human-centered.” The goal is to drive the uptake of AI and grow a local AI ecosystem by reducing the risks associated with AI applications.

The AI Act sorts uses of AI into risk tiers. Unacceptable-risk uses, such as manipulative or deceptive techniques that cause harm, are banned outright. High-risk uses, such as AI in critical infrastructure or healthcare, require conformity assessments before deployment. Limited-risk uses, such as chatbots, are subject to transparency requirements, for example informing users that they are interacting with AI.

The Act also sets out specific requirements for general purpose AI models (GPAIs), the foundation models that underpin many AI applications. Commercial GPAIs face lighter obligations, chiefly transparency rules and disclosures, while the most powerful GPAIs, those deemed to pose systemic risk, must also proactively assess and mitigate those risks.

Compliance deadlines for the Act's different components are staggered, starting six months after entry into force for banned uses and extending to mid-2027 for some high-risk systems. This staggered approach gives companies time to prepare for compliance and gives regulators time to develop guidance and clarify definitions.

Enforcement is split. The European Commission will oversee the rules for GPAIs, with penalties of up to 3% of a model maker's global turnover. Member state-level authorities will oversee compliance for AI systems, with fines of up to 7% of global turnover for breaches of banned uses and up to 3% for other violations.

Overall, the AI Act aims to balance innovation and trust in AI technologies within the EU. While the regulations may evolve as technology advances, they provide a framework for responsible AI development and deployment. As companies work towards compliance, regulators will play a crucial role in ensuring that AI applications adhere to the law’s requirements.