Following a highly contentious U.S. presidential election that saw both candidates and their supporters use Twitter as a major platform for expressing their views, the micro-blogging site is rolling out new policies and tools to combat abusive online behavior.
The company announced the changes in a blog post earlier today. In addition to an updated policy on hateful conduct, Twitter is also expanding its previous “mute” feature to enable users to avoid seeing notifications for specific keywords, phrases and conversations.
First introduced in 2014, Twitter’s mute function allows users to avoid unwanted content without actually blocking the other tweeters. Unlike blocking, which a blocked user can discover by visiting the blocker’s profile, muting is invisible to the muted account, so it can reduce the likelihood that an abusive person will simply create a new Twitter account and continue directing comments at the same target.
Many Requests for Expanded ‘Mute’
In today’s blog post, Twitter acknowledged that many users have been requesting expanded mute capabilities to shield themselves from abusive content. “We’re going to keep listening to make it better and more comprehensive over time,” the company said.
Twitter’s new hateful conduct policy is aimed at making it easier for users to report a wide range of abusive behavior “in a more direct way,” according to the post. It will also improve Twitter’s ability to process reports of online abuse, which “helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter.”
Along with the new mute features and updated policies, Twitter has also responded to reports of abuse by retraining its support teams with a special focus on “cultural and historical contextualization of hateful conduct.” What’s more, the company has improved its internal tools and systems to enable support personnel to respond to abuse reports faster, more effectively and more transparently.