Online news trolls not as bad as we think

Many people are turned away by abusive language on online news sites, but new research reveals that only 15 per cent of comments are “nasty.”

Toxicity online seems pervasive. We encounter and hear about all manner of obscene language, insults and slurs. Some of it even comes from U.S. President Donald Trump. Much of his long list of attacks against people, places and things has been unfurled online.

Mhairi Black, a British MP, speaking of the online abuse she receives on a daily basis, says it is anything but entertaining: “I struggle to see any joke in being systematically called a dyke, a rug muncher, a slut, a whore, a scruffy bint.” Her speech made headlines in the United Kingdom because it was the first time anyone had used the c-word in the House of Commons.

The work we do within my research group looks at a large dataset of comments posted in response to online articles in The Globe and Mail, Canada’s main English-language daily. We discovered that, although there is certainly some trolling, there are also a significant number of constructive comments on news articles.

The comments on the Globe and Mail site are typically related to the article, provide evidence to support an opinion and are specific. In general, we found them to be “nice” and not “nasty” comments. And more than just nice, they promote meaningful conversations, stay on topic and engage with other readers.

In fact, our research has shown that only about 10-15 per cent of online news comments are toxic, containing offensive language, insults or attacks. No matter how small or large the percentage, news organizations and online platforms are continuously engaged in efforts to minimize the toxicity level of comments posted on their sites.

This is important, because many people are turned away by abusive language online, even if it is only a small fraction of all content.

In a recent New York Times readers’ survey, some readers said they find the comments valuable because reading them exposes them to views that are different from their own.

We do not know if anyone will change their point of view as a result of reading the comments, but perhaps they will understand that other points of view can be valid. Some are working to introduce the idea that we all benefit when perspectives are challenged and commenters are forced to rigorously interrogate and defend their views.

Being exposed to only certain points of view leads to a “spiral of silence”: a situation where individuals are afraid to voice what they consider a minority opinion, which then reinforces that opinion as marginal.

Promoting meaningful conversations

Most news sites prominently display community guidelines, and sometimes ask readers to confirm that they will abide by them before posting. Until it discontinued reader comments late last year (they have since been reinstated), The Globe and Mail asked readers to review other comments and assess them for civility.

The New York Times selects “NYT Picks,” posts that its moderators find particularly insightful or that display a range of viewpoints. Some organizations have decided to ban comments altogether, finding that they contain too much offensive material, or that the work of moderating them is too onerous.

The Canadian Broadcasting Corp. does not allow feedback on stories about Indigenous issues, because they “draw a disproportionate number of comments that cross the line and violate our guidelines.” The Tyee has appealed to its readers to help it “stay out of the muck.”

News organizations, and online content providers in general, all seem to be grappling with how much user-generated content to allow and how to moderate it. Even Twitter is asking academics to help it foster healthier conversations.

Can anyone become a troll?

Previous research also shows that toxicity is widely distributed across the population of online users. For example, a study of Wikipedia talk pages found that only about 20 per cent of abusive comments are written by established trolls. The other 80 per cent are one-offs, written by people having a bad day. So, yes, even you or I can become a troll.

This poses a particular challenge, as most efforts are directed at banning specific individuals from platforms. Banning is necessary, and partly effective because of a “contagion effect,” the idea that negative ideas are more influential than positive ones; but it is even more important to educate all users of a platform.

In other words, banning the few bad apples will not be enough to stop the flow of toxicity.

It may feel like you encounter nasty and toxic language frequently. And, indeed, in some forums (for instance, some subreddits) that is the case. But, as our research reveals, most news comments are relevant to the article, provide evidence to support the views expressed and contribute to the conversation.

We are now developing algorithms that will automatically identify constructive comments, so that we can promote them, and identify toxic ones, so that we can filter them out. One possibility would be to promote good comments by displaying them more prominently, ahead of rants and personal attacks.
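To give a rough sense of what such a classifier might look like, here is a minimal sketch in Python. It is not the system we are building: the tiny hand-labelled dataset, the TF-IDF features and the logistic regression model are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of a constructive-comment classifier.
# The tiny labelled dataset and the TF-IDF + logistic regression pipeline
# are illustrative assumptions, not the models used in this research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = constructive, 0 = not constructive.
comments = [
    "The article overlooks the 2016 policy change; here is a source.",
    "Great point about transit funding; rural areas face the same issue.",
    "You are an idiot and so is everyone who agrees with you.",
    "This is garbage. Typical lying journalists.",
]
labels = [1, 1, 0, 0]

# Bag-of-words features weighted by TF-IDF, fed to a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score a new comment: the probability that it is constructive.
draft = "I disagree, but the census data you cite actually supports point two."
prob_constructive = model.predict_proba([draft])[0][1]
print(f"Constructive score: {prob_constructive:.2f}")
```

A real system would need far more labelled data and richer features (staying on topic, providing evidence, engaging with other readers), but the basic shape — score a comment, then rank or filter on that score — is the same.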

This research can also be used to help commenters as they write. Imagine if you could run your comment through an algorithm that would tell you whether it is constructive or not. Such systems already exist to stamp out toxicity, although they don’t always work well.

Our algorithm will tell you whether you are contributing to the conversation. Then we may have more of the meaningful and less of the nasty.
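Building on the hypothetical sketch above (and reusing its `model`), giving a commenter feedback while they write could be as simple as thresholding the classifier’s score before a comment is posted; the 0.5 cut-off here is an arbitrary assumption.

```python
# Hypothetical writer feedback, reusing `model` from the sketch above.
# The 0.5 threshold is an arbitrary assumption for illustration.
def feedback(draft: str, threshold: float = 0.5) -> str:
    score = model.predict_proba([draft])[0][1]
    if score >= threshold:
        return f"Looks constructive (score {score:.2f}). Ready to post."
    return (f"Low constructive score ({score:.2f}). "
            "Consider adding evidence or staying on topic.")

print(feedback("You people are all clueless."))
print(feedback("The report's own appendix contradicts the headline; see table 3."))
```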

This story was originally posted on The Conversation Canada and is republished here with the editor’s permission.


Professor of Linguistics, Simon Fraser University.