Aside from a slow internet connection, few things bother online players more than a toxic player on their own team or the opposing one. These players usually enjoy disturbing and alienating everyone around them rather than actually playing or winning the game.
Ubisoft and Riot Games have rolled up their sleeves and are developing a new artificial intelligence system to put a permanent stop to these toxic players. The data gathered by this system will be shared with other game companies.
Determined to end toxic behavior
Ubisoft, the producer of games such as Rainbow Six Siege and The Division 2, and Riot Games, the producer of League of Legends and Valorant, announced their joint project called “Zero Harm in Comms” in a statement today. Within the scope of this project, a special artificial intelligence system will be added to both companies’ online games; it will examine in-game text chats and detect toxic players.
The artificial intelligence will initially be trained, using machine learning, on the messages of players who have been reported as toxic. It is then meant to detect toxic players before they are reported and collect data such as their usernames, IP addresses and the MAC addresses of their devices. This information will later be shared with companies across the game industry, helping them identify these toxic players right at the start of a game.
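Neither company has published technical details, but the core idea described here, training a text classifier on chat messages tied to player reports, can be illustrated with a minimal sketch. The Python example below uses scikit-learn; the sample messages, labels and model choice are all assumptions made for illustration and are not drawn from the Zero Harm in Comms project itself.

```python
# Minimal sketch of a toxicity classifier trained on reported chat messages.
# All data and model choices here are illustrative assumptions; the actual
# Zero Harm in Comms system has not been publicly documented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: chat lines labeled by whether they came from
# a reported (toxic) interaction (1) or ordinary chat (0).
messages = [
    "uninstall the game you are useless",   # reported
    "nice shot, well played",               # not reported
    "you are trash, go back to tutorial",   # reported
    "gg everyone, good game",               # not reported
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a simple, common baseline
# for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score new chat lines; messages above some threshold could be flagged
# for review before any player files a report.
new_messages = ["well played team", "you are so bad, quit the game"]
for text, prob in zip(new_messages, model.predict_proba(new_messages)[:, 1]):
    print(f"{prob:.2f}  {text}")
```

In practice such a system would be trained on far larger, moderated datasets and would almost certainly keep human review in the loop before flagging a player; this sketch only shows the general shape of the approach.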
With this artificial intelligence, Ubisoft and Riot Games aim to rid the game industry of toxic players and to build a gaming environment where everyone is understanding and insults no longer fly back and forth. Representatives of the two companies also announced that they want to expand the experimental project by inviting other companies in the game industry to join the system.
Do you think this project from Ubisoft and Riot can really put an end to toxic players?