In March, Twitter introduced a new approach to improving the tone of public conversation on the platform. One important element now being tackled is so-called 'trolls'.
Troll behaviour can be playful, well intentioned and humorous. But it can also disrupt and distract from public conversation on the platform, especially within conversation threads and in search. Some of these Tweets and accounts violate Twitter's policies, which requires action. Others do not break the rules, but still act as sources of noise in conversations between users.
To put this in context: fewer than 1% of all accounts make up the majority of accounts reported, and a large proportion of those reported accounts do not actually violate Twitter's rules. Although these accounts are small in number, they have a disproportionate, negative impact on people's experience on the platform. The challenge is therefore: how can Twitter proactively deal with disruptive behaviour that does not violate the rules but still has a negative influence on the tone of a conversation?
Today, a combination of policy, manual review processes and machine learning determines how Tweets are organised and presented in, for example, conversations and search results. Twitter's priority now is to tackle behaviour that distracts from and disrupts public conversations.
The new approach incorporates many signals, most of which are not externally visible. A few examples: an account has not confirmed its email address, the same person registers multiple accounts at the same time, an account repeatedly Tweets at and mentions accounts that do not follow it, or behaviour suggests a possible coordinated attack. Twitter also looks at how accounts are connected to accounts that do violate the rules, and how those accounts interact with one another.
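Purely as an illustration (a minimal sketch with invented signal names and weights, not Twitter's actual system), such behavioural signals could be collected per account and combined into a single score:

```python
from dataclasses import dataclass

# Hypothetical behavioural signals for one account. The field names and
# the weights below are invented for illustration only.
@dataclass
class AccountSignals:
    email_confirmed: bool       # has the account confirmed its email address?
    simultaneous_signups: int   # accounts registered by the same person at once
    unsolicited_mentions: int   # repeated mentions of accounts that do not follow back
    links_to_violators: int     # connections to accounts that do violate the rules

def disruption_score(s: AccountSignals) -> float:
    """Combine the signals into one score; a higher score suggests more
    disruptive behaviour. The weights are arbitrary example values."""
    score = 0.0
    if not s.email_confirmed:
        score += 1.0
    score += 0.5 * min(s.simultaneous_signups, 10)
    score += 0.2 * min(s.unsolicited_mentions, 20)
    score += 0.8 * min(s.links_to_violators, 5)
    return score
```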
These signals are used as one input in determining how conversations and search results are organised. Because this content does not violate Twitter's policies, it remains available when the user clicks on 'Show more replies' or chooses to view everything in the search results. The result is that people who contribute to a healthy conversation become more visible in conversations and in search.
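Conceptually, that ranking step might look like the sketch below (again a hypothetical illustration, assuming a scoring function such as the `disruption_score` example above and an arbitrary threshold): Tweets that score poorly are not removed, only placed behind 'Show more replies'.

```python
def rank_conversation(tweets, score_fn, threshold=2.0):
    """Split a conversation into Tweets shown by default and Tweets kept
    behind 'Show more replies'. Nothing is deleted: collapsed content stays
    available when the user expands the thread.

    tweets:   list of (tweet_text, signals) pairs
    score_fn: a scoring function, e.g. the disruption_score sketch above
    """
    visible, collapsed = [], []
    for text, signals in tweets:
        if score_fn(signals) < threshold:
            visible.append(text)
        else:
            collapsed.append(text)
    return visible, collapsed
```

The threshold of 2.0 here is arbitrary; in practice such a system would need to be tuned and combined with policy and manual review, as the article describes.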
The results of the first international testing round show that the new approach is having a positive effect: abuse reports originating from search decreased by 4% and reports from conversations by 8%. This means fewer people are seeing Tweets that disrupt their experience.
But there is still a long way to go. Twitter's approach is only part of the work needed to improve the tone of the conversation and everyone's experience. Over time, the technology and the team will make mistakes, but they will also learn from them. There will be false positives and things they overlook. Twitter hopes to learn quickly and to make its processes and tools smarter. In the meantime, the company is trying to be transparent about the mistakes it makes and the progress it is making.
The results achieved so far are encouraging. Nevertheless, Twitter realizes that this is only one step in a longer process to improve the overall health of the service and the experience.