Updater
June 06, 2022, in technology

Reader comments - assets or liabilities?

Online news publishers have been able to host readers' comments for well over a decade. Are they worth the effort? It’s not always clear, but with a good moderation plan in place, comment sections can be engaging and insightful.

The internet can be a contentious place, and for much of the web’s history, that has been on full display in comment sections. From local news sites to Facebook, people are ready to weigh in with their opinions, which lets them feel like part of the story. For some publishers, however, the controversy stirred up by comment sections simply isn’t worth the hassle.

According to a 2018 WAN/IFRA global survey of comment and moderation policies, publisher attitudes toward comments tend to fall into one of three categories:

  • Those who embrace comments as an engagement strategy
  • Those who see them as a necessary evil
  • Those who have given up on comments entirely (seven organizations in the survey)

Since that study came out, more sites have opted out of comment sections. The Poynter Institute reported that The Philadelphia Inquirer removed comments from most stories in February 2021, while NJ.com did away with comments in 2020. They join the ranks of “Many of the biggest legacy media publications, including National Public Radio, Reuters and CNN, [who] haven’t allowed most comments for years.” But is that a mistake?

Comment sections: The good, the bad, and the ugly

Comment sections seem to be just as polarizing among publishers as they are among commenters, but as with so many other things in life, moderation may be the key. At their worst, comment sections can devolve into abusive threads filled with trolls and misinformation. Publishers who hope that readers will feel engaged in an informed conversation instead find that those users are shouted down by “uninformed, not to mention badly-written contributions,” according to WAN/IFRA.

While banning comments may seem drastic, publishers with budget concerns sometimes find that the time and money put into comment moderation can be a drain on limited resources. To them, putting that money into producing more news can look like the better investment.

However, a study at the Financial Times suggests that may be a short-sighted view. Digiday reports, “After spending months analyzing the behavior of people who comment on its site, the newspaper has found that those who leave comments read more articles, spend more time on site and come back to the FT more often than those who don’t.” In fact, users who comment are reportedly seven times more engaged than those who avoid the comment sections.

Publishers on the fence may need to think long and hard about these pros and cons and decide whether comment moderation is worth the investment.

Comment moderation best practices

One way to alleviate some of the guesswork around comment moderation is to implement a set of best practices and policies that guide moderators. Here are some considerations for publishers:

Reconsider the role of anonymity

Anonymous internet trolls have often been the boogeyman of comment sections. In fact, Facebook made this a key talking point, as The Conversation reports: “Back in 2011, Randi Zuckerberg, sister of Mark and (then) marketing director of Facebook, said that for safety’s sake, ‘anonymity on the internet has to go away.’” More than 10 years later, we know that Facebook is not any more civil just because people, ostensibly, use their real names when engaging online.

And, in fact, the same article from The Conversation reports that anonymity with limits can actually enhance discourse in some news settings: “There was a great improvement after the shift from easy or disposable anonymity to what we call ‘durable pseudonyms.’ But instead of improving further after the shift to the real-name phase, the quality of comments actually got worse – not as bad as in the first phase, but still worse by our measure.”

At the FT, only registered subscribers are able to comment, and the team still takes an active role in moderating the discussion, publishing only the most insightful comments to help foster a better culture of dialog.

Preparation is key

Publishers who take comment moderation seriously do not just “wing it.” At The New York Times, members of the moderation team take turns moderating comments on a story during the first 24 hours after publication. According to “Reporting the Community Beat: Practices for Moderating Online Discussion at a News Website,” “Moderators prepare to open an article for discussion by reading it to get a sense of what might be on-and-off topic, to ‘trouble shoot what commenters may be saying about an article.’”

Comment moderation is a team sport

Moderating a discussion is not just about shutting down abusive comments or ideas the moderator finds objectionable — and that’s why the NYT team often engages in discussion. According to “Reporting the Community Beat,” “When moderators shift onto an article that has been open for a while, they read the article, but also spend time learning about the current state of the discussion. The Community Team uses several collaborative technologies to coordinate their efforts around all of the open discussions (e.g., Slack, Google Sheets).”

Pre- or post-publication moderation

Moderators can either review comments after they’re posted, deleting any that break the rules, or screen them before publication and decide whether they should appear at all. According to the WAN/IFRA report, “There was a relatively even split between those that moderate pre- and post-publication: 38 and 42 respectively, with 16 adopting a mixed approach.”
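
For publishers building their own commenting tools, that choice is mostly a routing decision. Below is a minimal Python sketch of how a pipeline might support pre-, post-, or mixed moderation; all of the names and the flagging logic are illustrative assumptions, not any particular vendor’s design:

```python
from dataclasses import dataclass, field
from enum import Enum


class Policy(Enum):
    PRE = "pre"      # hold every comment until a moderator approves it
    POST = "post"    # publish immediately, review afterwards
    MIXED = "mixed"  # pre-moderate flagged authors, post-moderate the rest


@dataclass
class Comment:
    author: str
    text: str
    published: bool = False


@dataclass
class CommentPipeline:
    policy: Policy
    flagged_authors: set[str] = field(default_factory=set)
    review_queue: list[Comment] = field(default_factory=list)

    def submit(self, comment: Comment) -> None:
        # Under a mixed policy, only authors with a bad history are held back.
        hold = self.policy is Policy.PRE or (
            self.policy is Policy.MIXED and comment.author in self.flagged_authors
        )
        if not hold:
            comment.published = True  # post-moderation: visible right away
        self.review_queue.append(comment)  # every comment still gets reviewed
```

The practical difference is simply when readers can see the comment: under post-moderation it is live while it waits for review, while under pre-moderation it never appears unless a moderator approves it.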

Moderation norms

News outlets looking to implement a moderation plan can learn a lot from their peers. The WAN/IFRA study found that, on average, news organizations delete about 11% of comments, and the primary reasons are that the content:

  • is generally offensive
  • contains hate speech or bad language
  • is spam
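
Those three categories are concrete enough to serve as an automated first pass before human review. Here is a rough Python sketch; the word lists and patterns are placeholders for illustration, not a real moderation lexicon or a trained classifier:

```python
import re

# Placeholder term lists for illustration only; real systems rely on
# maintained lexicons and trained models rather than hard-coded words.
OFFENSIVE_TERMS = {"idiot", "moron"}
HATE_TERMS = {"slur_placeholder"}  # stands in for a curated hate-speech list
SPAM_PATTERNS = [re.compile(r"https?://\S+"), re.compile(r"buy now", re.I)]


def deletion_reason(text: str) -> str | None:
    """Return the first matching deletion category, or None to keep."""
    words = set(re.findall(r"[a-z_']+", text.lower()))
    if words & HATE_TERMS:
        return "hate speech or bad language"
    if words & OFFENSIVE_TERMS:
        return "generally offensive"
    if any(p.search(text) for p in SPAM_PATTERNS):
        return "spam"
    return None  # surviving comments still go to human moderators
```

A filter like this only catches the obvious cases; borderline comments still need the human judgment described above.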

Use of AI in online content moderation

If moderating the comment section of a busy online news organization still sounds like a monumental task, that’s because it is. However, there is a possible solution for media outlets with limited human resources to spare: AI.

As with so many other uses for AI, the technology will not eliminate the need for human moderators but can greatly reduce the burden on the team. Appen, developers of an AI-driven moderation tool already in use in a variety of social media and e-commerce settings, comments: “Computers categorize users with a history of posting spam or explicit content as ‘non-trusted’ and apply greater scrutiny toward any future content they post. Reputation technology also combats fake news: computers are more likely to label content from unreliable news sources as false.” However, people may still be needed to “provide ground truth monitoring for accuracy and handle the more contextual, nuanced content concerns.”
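
At its core, that reputation approach is a triage: combine a model’s score with the author’s track record, handle the confident cases automatically, and route the rest to people. The sketch below shows the idea in Python; the thresholds, names, and trust formula are assumptions made for illustration, not Appen’s actual product:

```python
from dataclasses import dataclass

TRUST_THRESHOLD = 0.5  # illustrative cutoff for treating a user as "non-trusted"
AUTO_REMOVE = 0.95     # model confidence above which a comment is removed outright
AUTO_APPROVE = 0.10    # model confidence below which a comment is published outright


@dataclass
class User:
    name: str
    prior_violations: int = 0

    @property
    def trust(self) -> float:
        # Users with a history of spam or explicit posts earn less trust.
        return 1.0 / (1 + self.prior_violations)


def triage(user: User, spam_probability: float) -> str:
    """Route a comment using the model score plus the author's reputation.

    spam_probability stands in for the output of any trained moderation
    model; the thresholds here are assumptions made for this sketch.
    """
    if user.trust < TRUST_THRESHOLD:
        # Greater scrutiny for non-trusted users: nothing auto-publishes.
        return "remove" if spam_probability > AUTO_REMOVE else "human_review"
    if spam_probability > AUTO_REMOVE:
        return "remove"
    if spam_probability < AUTO_APPROVE:
        return "publish"
    return "human_review"  # humans handle the contextual, nuanced middle
```

The human_review branch is where Appen’s caveat lives: people still provide the ground truth and handle the nuanced cases the model can’t confidently score.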

Armed with the right guidelines and technology, content moderation teams can help make comment sections what they should be — a place for thoughtful, engaging discussions among invested readers and not just a place for trolls to wreak havoc.

Interested?

Find out more about Eidosmedia products and technology.

GET IN TOUCH