First Details of the EU’s Artificial Intelligence Law Emerge

ChatGPT, Midjourney, DALL-E… Even their own developers worry about how far these advanced assistants, known in the IT world as “generative artificial intelligence”, will go. The European Union is hurriedly preparing a bill on generative AI systems like ChatGPT, which was recently banned in Italy for this very reason.

Lawmakers in the European Parliament took action after the unstoppable advances in generative AI software. The concerns of the EU, including Italy, which became the first country in history to ban an AI application, are not as superficial as the “calls of profit-oriented private companies trying to get ahead of one another”.

Dragos Tudarache, co-rapporteur of the EU’s Artificial Intelligence Law, told TechCrunch that negotiations on the draft law, which is expected to be voted on in the European Parliament in May, are in their final stage.

According to the initial details, the EU’s Artificial Intelligence Law will consist of three layers:

  • Responsibilities along the value chain in AI production,
  • Safeguard measures for foundation models,
  • Addressing content issues related to generative AI tools such as ChatGPT.

This three-layered structure covers the main topics currently being debated by MEPs in the European Parliament. In other words, EU officials have not heeded the competitively motivated calls from tech giants to “pause ChatGPT-style development”; instead, they want to bring these developments under greater public oversight.

According to Tudarache, the law will require companies to publicly disclose which models and data they use to train the generative AI systems they develop. Holding companies legally responsible for how a model is trained will, in turn, make it possible to control the outputs of the AI. In addition, how companies are governed and how their AI systems are tested will also have to be disclosed under the principle of transparency.

However, this transparency principle weakens the competitive edge of companies developing generative AI. By keeping key stages of their products secret from competitors, such companies were able to get ahead of giants like Google and release successful products such as ChatGPT.

