Social media company Twitter has launched a filter for verified iOS users to keep abusive tweets and threats of violence at bay, according to a report in TechCrunch.
The ‘quality filter’ is only available to some verified users on iOS, but the company’s official blog post says it is also updating its policy on dealing with abuse. The blog post points out that Twitter’s violent threat policy now “extends to ‘threats of violence against others or promot[ing] violence against others.’”
The blog post said, “Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior. The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse.”
The post also mentioned the new filter feature stating, “We have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive.”
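Twitter’s post describes the signals only at a high level. As a rough illustration, and not Twitter’s actual system, a filter combining the two signals it names (account age and similarity to previously flagged content) might look like the sketch below; the function names, thresholds, and the string-similarity stand-in are all assumptions.

```python
# Illustrative sketch only: a toy notifications filter built on the two signals
# Twitter's post mentions (account age, similarity to known abusive content).
# Thresholds, names and the similarity measure are assumptions, not Twitter's.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Tweet:
    text: str
    account_age_days: int


# Stand-in for content a safety team has previously determined to be abusive.
KNOWN_ABUSIVE = [
    "example of text previously judged abusive",
]


def similarity_to_known_abuse(text: str) -> float:
    """Crude stand-in for whatever similarity model Twitter actually uses."""
    return max(
        SequenceMatcher(None, text.lower(), ref).ratio() for ref in KNOWN_ABUSIVE
    )


def should_hide_from_notifications(
    tweet: Tweet,
    min_account_age_days: int = 30,
    similarity_threshold: float = 0.8,
) -> bool:
    """Hide a tweet from notifications if it comes from a very new account
    and closely resembles content previously flagged as abusive."""
    too_new = tweet.account_age_days < min_account_age_days
    too_similar = similarity_to_known_abuse(tweet.text) >= similarity_threshold
    return too_new and too_similar
```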
The new feature will remove from the notifications timeline tweets that contain death threats or abusive language, duplicate content, or tweets sent from suspicious accounts. Twitter’s general counsel Vijaya Gadde had earlier noted in a Washington Post column that the company would step up its anti-abuse measures. “We need to do a better job combating abuse without chilling or silencing speech,” Gadde wrote. “We are also overhauling our safety policies to give our teams a better framework from which to protect vulnerable users.”
“As some of our users have unfortunately experienced firsthand, certain types of abuse on our platform have gone unchecked because our policies and product have not appropriately recognized the scope and extent of harm inflicted by abusive behavior,” Gadde said. “Even when we have recognized that harassment is taking place, our response times have been inexcusably slow and the substance of our responses too meager. This is, to put it mildly, not good enough.”
The social media site already provides options to mute and block users, but these have done little to curb the violent threats and hateful messages sent over the 140-character microblogging service.