European Union passes artificial intelligence law

The EU has adopted rules on artificial intelligence. While the legislation places the first restrictions on generative AI, companies are concerned that the regulation goes too far. Under the law, artificial intelligence that manipulates human behavior or exploits human vulnerabilities will be banned.

The European Parliament (EP) has approved new legislation that will impose strict rules on various artificial intelligence technologies such as ChatGPT and Gemini.

At the plenary session held in Strasbourg, the “Artificial Intelligence Law”, the world’s first comprehensive set of rules for artificial intelligence, was adopted with 523 votes in favor and 46 against.

According to the law, artificial intelligence systems to be used in European Union (EU) countries must be safe and respect fundamental rights.

Artificial intelligence systems will be regulated on a risk basis, according to their potential to harm society. The risk posed by some uses of artificial intelligence will be deemed unacceptable, and those systems will be banned.

Untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in workplaces and educational institutions, social scoring, and biometric categorization to infer sensitive data such as sexual orientation or religious beliefs will be prohibited.

Artificial intelligence that manipulates human behavior or exploits human vulnerabilities will be banned. Law enforcement will still be able to use artificial intelligence in certain cases: in exceptional situations such as preventing terrorist attacks or identifying missing persons, law enforcement units will be able to use real-time remote biometric identification systems in public spaces with legal authorization.

Stricter rules will apply to high-risk artificial intelligence systems. Artificial intelligence systems used in critical infrastructure, education, employment, healthcare, public services, banking, immigration and border management, and democratic processes such as elections will fall into the high-risk category.

For such systems, risks will have to be assessed and mitigated, usage logs kept, and transparency and human oversight ensured.

Special rules will apply to large systems that can perform a wide range of tasks, such as generating video, text, images, speech in another language, calculations, or computer code. These “general-purpose artificial intelligence” systems will have to meet various transparency requirements before being placed on the market.

AI-altered audio or video content will need to be clearly labeled. The law will enter into force two years after its approval by the EU Council and its publication in the EU Official Journal. Under the new law, artificial intelligence technologies such as Google’s Gemini model and ChatGPT will have to comply with these rules.
