A New Anti-Harassment Tool Coming From Google Jigsaw

Google's Jigsaw division has released the code for a new tool that fights abusive messages, aimed especially at journalists using social media.

Google’s Jigsaw unit is releasing the code for an open source harassment prevention tool called Harassment Manager. Designed for journalists and other public figures, the tool uses Jigsaw’s Perspective API to let users rank potentially abusive comments on social media platforms, starting with Twitter. Released first as source code for developers to build on, it will launch as a functional app for Thomson Reuters Foundation journalists in June.

Harassment Manager works with Twitter’s API to combine moderation options, such as hiding tweet replies and muting or blocking accounts, with a system of bulk filtering and reporting. Perspective checks the language of messages for levels of “toxicity” based on elements such as threats, insults, and profanity. Users can then sort messages in a dashboard and deal with them collectively rather than one at a time through Twitter’s default moderation tools. The dashboard can blur the text of messages so users don’t have to read every single one, and it supports searching by keyword as well as using automatically generated lists.
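The triage flow described above (score each message for toxicity, then bulk-sort and filter rather than reading one at a time) can be sketched in a few lines. This is a hypothetical illustration, not Harassment Manager's actual code: the `Message` record, `triage` function, and hard-coded scores are all assumptions; in practice the toxicity scores would come from Perspective's comment-analysis endpoint.

```python
from dataclasses import dataclass

# Hypothetical message record. In a real integration, the toxicity score
# would be returned by the Perspective API for each message's text.
@dataclass
class Message:
    author: str
    text: str
    toxicity: float  # 0.0-1.0 score, higher = more likely toxic

def triage(messages, threshold=0.8, keywords=()):
    """Bulk triage: flag messages whose toxicity meets the threshold
    or that contain a watched keyword, and return them most toxic first,
    so the user can act on the whole batch instead of reading each one."""
    flagged = [
        m for m in messages
        if m.toxicity >= threshold
        or any(k.lower() in m.text.lower() for k in keywords)
    ]
    return sorted(flagged, key=lambda m: m.toxicity, reverse=True)

msgs = [
    Message("a", "Great article, thanks!", 0.02),
    Message("b", "You are an idiot.", 0.93),
    Message("c", "Nobody asked for your opinion.", 0.71),
]
for m in triage(msgs, threshold=0.8, keywords=["opinion"]):
    print(m.author, m.toxicity)
```

Sorting flagged messages by score is what lets a dashboard surface the worst abuse first, while the keyword filter catches targeted harassment that a generic toxicity model might miss.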

Harassment Manager also allows users to download a standalone report containing abusive messages, creating a trail for employers or law enforcement in case of illegal content such as direct threats. But for now, there is no standalone app that users can download. Instead, developers will be able to freely create applications containing its functionality and services, which will be launched by partners such as the Thomson Reuters Foundation.

Google has previously used Perspective for automatic moderation. In 2019 it released a browser extension called Tune that let social media users filter out potentially toxic posts, and Perspective has been adopted by many commenting platforms to support human moderation.

At the time Perspective and Tune were released, however, the underlying language analysis model was far from perfect: it sometimes misclassified satirical content or failed to detect abusive messages.

Unlike the AI-powered moderation built into services like Twitter and Instagram, Harassment Manager is not platform-side moderation. It is a ranking tool meant to help manage the sometimes enormous scale of social media feedback. We think it will be useful even for people far outside the field of journalism, even if they can’t use it for now.
