An Artificial Intelligence Office will be established within the EU Commission, tasked with inspecting advanced artificial intelligence models, contributing to the development of standards and testing practices, and implementing common rules in all member countries.
A scientific panel of independent experts will make recommendations to the Artificial Intelligence Office. The Artificial Intelligence Board, made up of representatives of member countries, will serve as a coordination platform and advisory body for the Commission.
Fines for violations of the AI law will be calculated either as a percentage of the offending company’s global annual turnover in the previous financial year or as a predetermined amount, whichever is higher.
Violations involving prohibited artificial intelligence applications will carry a fine of 7 percent of the company’s turnover or 35 million euros; non-compliance with other obligations, 3 percent of turnover or 15 million euros; and the supply of incorrect information, 1.5 percent of turnover or 7.5 million euros. In each case, the amount that costs the company more will apply.
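To illustrate with hypothetical figures: a provider with a global annual turnover of 1 billion euros that breaches the ban on prohibited applications would face the higher of 7 percent of turnover (70 million euros) and the fixed 35 million euros, so 70 million euros; a provider with a turnover of 100 million euros would instead pay the fixed 35 million euros, since 7 percent of its turnover comes to only 7 million euros.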
The law will come into force after its formal approval by EU member countries and the European Parliament, and its implementation will begin two years after that approval.
Two years ago, the EU Commission prepared the first legislative proposal setting out a new framework of rules on artificial intelligence and presented it to member states and the European Parliament. The proposal introduced certain limitations and transparency rules for the use of artificial intelligence systems.
In the Commission’s proposal, artificial intelligence systems were divided into four main risk categories: unacceptable risk, high risk, limited risk and minimal risk. Under the new law, artificial intelligence technologies such as Google’s Gemini model and OpenAI’s ChatGPT will have to comply with the new rules.
Law enforcement and artificial intelligence
Law enforcement agencies will be able to use artificial intelligence in their activities. Under an emergency procedure, they will be able to deploy a high-risk artificial intelligence tool that has not yet passed the conformity assessment procedure.
In exceptional and necessary cases, police units will be able to use real-time remote biometric identification systems in public spaces, subject to prior authorization. The use of such systems will be limited to situations such as terrorist attacks, the prevention of present or foreseeable threats, and the search for people suspected of the most serious crimes.
However, a special mechanism will be put in place to ensure that fundamental rights are adequately protected against possible abuses of artificial intelligence systems.
Special rules will apply to large systems capable of performing a wide range of tasks, such as generating video, text and images, conversing in different languages, performing calculations or writing computer code. These general-purpose artificial intelligence systems will have to meet various transparency obligations before being placed on the market.