Game Devs Are Turning To AI To Fight Mean Jerks In Voice Chat

A new Unity product called Safe Voice is launching in closed beta and will use AI monitoring to help moderators identify problems.

Unity Technologies has announced a new tool for its developer suite that uses AI to help devs identify toxicity in online games. The new Safe Voice tool is launching in closed beta and is aimed at letting studios isolate and review toxicity reports quickly. Unity says that Hi-Rez's Rogue Company took part in early testing for the feature, and the studio continues to use the tool as it enters its beta period.

Safe Voice is said to analyze aspects like tone, loudness, intonation, emotion, pitch, and context to identify toxic interactions. It activates when a player flags another player's behavior, then begins monitoring and delivers a report to human moderators through an overview dashboard. That dashboard lets moderators review individual incidents as well as track trends over time to inform their moderation plans. Unity also says this is the first in a larger suite of toxicity solutions it has coming.
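To make that flag-and-report flow concrete, here is a minimal, hypothetical sketch of how such an incident might be packaged for a moderation dashboard. The signal names mirror the aspects Unity says Safe Voice analyzes, but the structures and function below are illustrative assumptions, not Unity's actual Safe Voice API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical acoustic/contextual signals, mirroring those Unity says the tool analyzes.
@dataclass
class VoiceSignals:
    tone: float      # e.g. 0.0 (calm) to 1.0 (aggressive)
    loudness: float  # normalized volume of the flagged clip
    pitch: float     # deviation from the speaker's usual pitch
    emotion: str     # coarse label such as "angry", "neutral", "excited"
    context: str     # short transcript or match-state summary

# One incident report, delivered to human moderators after a player flags a behavior.
@dataclass
class IncidentReport:
    match_id: str
    flagged_player: str
    reporter: str
    signals: VoiceSignals
    toxicity_score: float  # model output, 0.0-1.0
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def on_player_flag(match_id: str, flagged_player: str, reporter: str,
                   signals: VoiceSignals, toxicity_score: float) -> IncidentReport:
    """Package the analyzed clip into a report for the moderation dashboard.

    Note: no automatic punishment happens here; the report is handed to humans.
    """
    return IncidentReport(match_id, flagged_player, reporter, signals, toxicity_score)
```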

"It's one of the number one reasons that people leave a game and stop playing because there's some sort of bad situation around toxicity and other elements of abuse," Mark Whitten, Unity president of Create Solutions, told GameSpot.

Hi-Rez Studios announced a Unity partnership for a new voice chat recording system in February, when it issued the update that began testing the tool. In the Safe Voice announcement, Rogue Company's lead producer said the tool has been helpful in identifying and mitigating problems before they escalate.

In the early testing phase, Unity focused on making sure the tool accurately flagged problems and shortened the time humans needed to be involved. By that measure, Whitten said, it was very successful. Game developers that would typically receive tens of thousands of reports in a given period were able to narrow them down quickly and prioritize the ones most likely to be genuine problems. And though automation has been a hot topic in tech lately, Whitten says the tool is meant to take a load off of human moderation teams.
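As a rough illustration of that triage step, the hypothetical snippet below filters and ranks a large batch of reports by a model confidence score so moderators see the most likely problems first. The threshold and scores are placeholders, not figures Unity has published.

```python
# Hypothetical triage: given a large queue of flagged-voice reports (each carrying a
# model confidence score), surface the ones most likely to need human review first.
def prioritize_reports(reports: list[dict], review_threshold: float = 0.8) -> list[dict]:
    likely_toxic = [r for r in reports if r["toxicity_score"] >= review_threshold]
    return sorted(likely_toxic, key=lambda r: r["toxicity_score"], reverse=True)

# Example: tens of thousands of raw reports narrowed to a short, ranked review queue.
queue = [{"report_id": i, "toxicity_score": (i % 100) / 100} for i in range(30_000)]
for report in prioritize_reports(queue)[:5]:
    print(report["report_id"], report["toxicity_score"])
```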

"I think this is an efficiency gain and not a replacement scenario," Whitten said. "That said--and I ran Xbox Live for many years--any day that I could replace a human who has to deal with looking at inappropriate things, I would happily do it. It's not fun work, that's not work that you really want people doing, putting them in the midst of looking at a bunch of bad behavior all day. You'd be much better off having screens that caught some of that and then allow them to take actions based on the screens instead of having to be the screener itself."

After that screening process feeds data to the moderators, the developers decide what action needs to be taken. As with all game moderation, it's up to individual studios to outline their policies and make sure that punishments are consistent. Finally, to protect user privacy, players will have to opt in to voice recording separately from any other online play agreements.

"Data is anonymized in the Unity databases," Whitten said. "It's connected to the player identity in the game so they can take moderation action if necessary, and then it's deleted off the services."

Game publishers have been looking to combat online toxicity in a variety of ways. Ubisoft gave players a way to contact local police, and last year announced it was teaming up with Riot to research an AI anti-toxicity project. Most recently, Microsoft announced it would allow Xbox Live users to share voice clips with moderators.
