Ubisoft and League of Legends maker Riot announced a new initiative meant to help reduce toxicity in online game chats. Officially called the Zero Harm in Comms project, the research will improve both companies’ artificial intelligence with the aim of creating a “cross-industry database and labeling ecosystem” that will make it easier for AI to identify potentially harmful in-game behavior before it happens.
“We are exploring how to better prevent in-game toxicity as designers of these environments with a direct link to our communities,” Yves Jacquier, executive director at Ubisoft La Forge, said in a statement. “Disruptive player behavior is an issue that we take very seriously but also one that is very difficult to solve. At Ubisoft, we have been working on concrete measures to ensure safe and enjoyable experiences, but we believe that, by coming together as an industry, we will be able to tackle this issue more effectively.”
While the two companies may seem like an unusual pairing, the rationale behind the project is that Ubisoft’s wide variety of online games and the intensely competitive nature of Riot’s titles should together allow the initiative to capture a broad range of problematic behavior and language.
Both companies will share their research with the broader industry in 2023, presumably at the Game Developers Conference, though the statement didn’t provide an exact date.
The project seems poised to make a difference in the game experience for players. Now someone just needs to figure out how to protect the development teams themselves from fan harassment online.
Written by Josh Broadwell on behalf of GLHF