EU Reached Agreement on Artificial Intelligence Law After 38 Hours of Talks

An agreement has been reached on the Artificial Intelligence Act, which aims to ensure that artificial intelligence is used transparently, safely, and without spreading disinformation.

We constantly use models such as ChatGPT and Midjourney and create various content with them. Since the day they were released, there have been ongoing attempts to restrict these tools, which are available to everyone around the world, and it was here that we first saw countries trying to draw certain limits.

China, the United Kingdom, and the United States (US) have been the countries most eager to take steps in this direction, and they still stand behind those efforts. However, it should not be forgotten that the EU has long been working on laws to ensure the safe use of artificial intelligence, an effort whose scope varies from country to country, and it has now taken the biggest step of all with the Artificial Intelligence Act.

After 38 hours of negotiation, an agreement was reached.

The talks focused on models such as ChatGPT on Thursday and moved on to systems such as facial recognition the next day, which officials also discussed.

The most fundamental aim of the law in question is to make the development of artificial intelligence safe. The first steps of this law were seen in 2021, and it came onto the agenda more often as ChatGPT gained momentum. The riskier the AI model or tool in question, the stricter the rules imposed on it. In addition, clear criteria distinguishing artificial intelligence systems from simple software will be listed.

There are updates on both the military and law enforcement sides:

On the military side, it will be ensured that AI-supported tools do not affect the security competences of member states or the organizations responsible for them. Law enforcement will also be able to benefit from artificial intelligence; in this context, remote biometric identification systems can be used in public areas if permission is obtained.

There are also risk classifications.

On the other hand, even if the AI tool in question is not in the high-risk group, it will have to clearly state that its content was created with artificial intelligence in order to keep users aware.

The use of some AI tools labeled as high risk will be allowed. However, these tools will have to meet the specified requirements in order to enter the EU market. Ensuring data security, of course, comes first.

Some tools, however, carry the label of unacceptable risk, which means they will be banned within the EU. In this context, cognitive manipulation, scraping facial images from the internet or CCTV footage, recognition of emotional cues in workplaces and educational institutions, social scoring, and biometric categorization to infer sensitive data such as sexual orientation or religious beliefs will be prohibited.
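Putting the tiers described so far side by side, a minimal sketch in Python may help; the tier names and the obligation wording are assumptions made for illustration, not text taken from the law itself.

```python
# Purely illustrative mapping of the risk tiers described in the article.
# Tier names and obligation summaries are assumed, not official wording.
RISK_TIERS = {
    "minimal": "Must clearly disclose that content was created with AI.",
    "high": "Allowed, but must meet the specified requirements (data security first) to enter the EU market.",
    "unacceptable": "Banned within the EU (e.g. cognitive manipulation, social scoring, scraping facial images).",
}

for tier, obligation in RISK_TIERS.items():
    print(f"{tier}: {obligation}")
```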

The situation for general-purpose artificial intelligence tools is as follows:

The foundation models that power these tools must offer a certain level of transparency before entering the market. On its website, the EU states that very complex, large, and high-performance models may pose risks; that is why a certain degree of transparency will be essential.

Of course, a separate office is needed to oversee all of this.

For this reason, an artificial intelligence office is being established to oversee the most advanced AI models. This office, which will draw on the advice of independent experts, will both identify security risks and share ideas for improvement.

A governing board above the office will also have a say in implementing the regulations, and an advisory forum in which leading industry names can take part will remain under the board's constant watch.

The European Parliament will vote on the Artificial Intelligence Law early next year.

However, we will not see any of the regulations put into practice before 2025.

So what will be the penalty if this law is not followed?

There are different scenarios if the law is violated. For prohibited practices, the penalty is 7 percent of the company's annual revenue or 35 million euros; for failing to meet the requirements, 3 percent of annual revenue or 15 million euros; and for spreading false information, 1.5 percent of annual revenue or 7.5 million euros. Wherever we say "or", whichever amount is higher applies.
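To make the "whichever is higher" rule concrete, here is a minimal sketch in Python using the figures from the article; the tier names, function name, and structure are assumptions for illustration, not anything defined by the law.

```python
# Illustrative only: percentages and fixed amounts come from the article,
# the tier keys and function name are assumed.
VIOLATION_TIERS = {
    # tier: (share of annual revenue, fixed amount in euros)
    "prohibited_practice": (0.07, 35_000_000),
    "requirements_not_met": (0.03, 15_000_000),
    "false_information": (0.015, 7_500_000),
}

def estimate_penalty(tier: str, annual_revenue_eur: float) -> float:
    """Return the higher of the revenue-based and fixed penalty for a violation tier."""
    revenue_share, fixed_amount = VIOLATION_TIERS[tier]
    return max(revenue_share * annual_revenue_eur, fixed_amount)

# Example: a company with 2 billion euros in annual revenue caught using a
# prohibited practice faces 7% of revenue (140 million euros), since that
# exceeds the 35 million euro floor.
print(estimate_penalty("prohibited_practice", 2_000_000_000))  # 140000000.0
```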

What are your thoughts about this law?
