
The Finnish text analytics company Utopia Analytics is offering its Utopia AI Moderator service to one of the social media giants for content moderation, to combat fraud and misinformation about the coronavirus.

“Online traffic has increased with the crisis,” stated Mari-Sanna Paukkeri, CEO of Utopia Analytics. “In a precarious situation, people want to communicate, and they have the time to do so. We are aware of how big social media companies are struggling with content moderation right now. Therefore, we’re offering them help.”

COVID-19 or not, national and international reports show that online hate speech is a growing problem all over the world. For example, the Council on Foreign Relations has stated that, at their most extreme, rumours and invective disseminated online have contributed to violence ranging from lynching to ethnic cleansing.

The fact is that the technology to make the Internet safer already exists. Advanced, machine learning-based moderation tools have been on the market for years. One of them is Utopia AI Moderator, which learns each online service’s unique moderation policy and, according to the company, is the only product that can analyse the meaning of text in any language. It can detect hate speech, toxic content, or any other type of unwanted content before it gets published.
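The article does not describe Utopia’s internals, but the general idea of a moderation tool that learns a service’s own policy can be sketched as a text classifier trained on that service’s past moderation decisions and applied to each new comment before publication. The minimal sketch below uses scikit-learn with invented example data; the training comments, labels, and the `moderate` helper are all hypothetical illustrations, not Utopia’s actual method.

```python
# Toy sketch of policy-based pre-publication moderation (hypothetical,
# NOT Utopia's implementation): train a classifier on comments already
# labelled under one service's moderation policy, then screen new
# comments before they go live.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical past moderation decisions for one online service.
comments = [
    "Thanks for the article, very informative.",
    "Great point, I agree with the author.",
    "You people are subhuman and should disappear.",
    "Anyone who thinks this deserves to be hurt.",
    "Interesting statistics, where is the source?",
    "Get out of our country, vermin.",
]
labels = [0, 0, 1, 1, 0, 1]  # 0 = publish, 1 = reject under the policy

# TF-IDF features plus logistic regression: a simple, standard baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

def moderate(comment: str) -> str:
    """Decide, before publication, whether a new comment may go live."""
    return "reject" if model.predict([comment])[0] == 1 else "publish"
```

A real deployment would need far more training data per service, continual retraining as the policy evolves, and multilingual features; the point of the sketch is only that the policy is learned from each service’s own decisions rather than hard-coded.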

Utopia AI Moderator is used by newsrooms, social media services, discussion forums, and other online services worldwide. It moderates hundreds, even thousands, of messages every second, in real time. Utopia’s statistics show that typically 18–25 per cent of news comments violate the online service’s terms and therefore should not be published. The share of improper content depends, for instance, on the moderation policy.

“Humans quite easily grow tired while trying to understand what they read. On a tight schedule, you might not be able to give a comment a second glance. However, since machines don’t have feelings, they always process the text in the same way. Think of a production line: we assume a milk bottle or a car is always the same quality, and making them on a production line is the only way to achieve that quality. The same applies to moderation work,” Paukkeri added.

“AI is difficult to train and maintain,” Paukkeri noted. “Many of the products are brand new. Even though you might have a bad experience with certain tools that don’t perform well, there are also people skilled enough to build AI tools that really work. Ask about the experiences of the people and online services that are already using the tools, and be open-minded. Once you’re in production, you’ll see how your users start to learn what’s okay and what isn’t, since you give them the feedback in real-time with an advanced AI tool.”
