Starting this week, everyone on Twitter has the option to “hide replies” on their tweets, a feature the company started testing earlier this year and one of several new ideas to improve the state of conversation on the platform. Other, bolder ideas are potentially coming down the pike, too: options to disable retweets, to remove an @ tag, or to prevent people from @-mentioning you without permission.
Collectively, the moves represent Twitter’s latest thinking around “healthy conversations,” an initiative it launched in March 2018. “We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers,” Jack Dorsey, the company’s CEO, wrote at the time. Starting then, he said, Twitter would begin “building a systemic framework to help encourage more healthy debate, conversations, and critical thinking.”
For Twitter, much of this work began with undoing its own missteps. Chris Wetherell, the developer who created Twitter's retweet function in 2009, has since compared it to handing “a four-year-old a loaded weapon.” Twitter’s cofounder, Biz Stone, has called the Mentions tab—which “put the onus on users to block someone”—a mistake. For many of Twitter’s users, though, the product changes have felt painfully slow and underwhelming.
Take “hide replies.” The feature lets someone bury an individual reply to their tweet—like, say, a crude joke or a reply promoting an Ethereum scam. These replies get hidden for everyone, although they can still be seen by clicking “more replies.” In that sense, it’s more like batting away an annoying fly—it’s still buzzing around somewhere, even if it’s not directly in your ear. Twitter users also need to take action on a tweet-by-tweet basis, “which might seem easy at a small scale, but not so much when users are being massively dog-piled,” says Patrícia Rossini, who researches communications and social media at the University of Liverpool.
Rossini likens “hide replies” to marking an email as spam: good for one-off incidents, bad for problems at scale. For a reply that’s merely annoying, hiding might be enough. But for users who are being attacked, harassed, or otherwise targeted, it won’t do much to fix the problem. “I would also be curious to learn the extent to which hiding tweets prevent bystanders to click on them and read them, which could give us a better sense of whether the hiding feature improves conversational dynamics,” says Rossini. “My initial sense is that it may help small-scale conversations, but is likely not enough for more serious cases of targeted attacks and harassment.”
Harassment and abuse are major problems on Twitter—that’s part of the reason the platform started the work on “healthy conversations” in the first place. Women and minorities are particularly affected. One study last year found that an abusive tweet was sent to a woman roughly every 30 seconds. (The study was based on crowdsourced data from a “troll patrol,” since Twitter does not break down reports of abuse by victim categories.) Recently, Twitter has refined its “quality filters” to weed out toxic tweets and given users more options to report abuse.
“Hide replies” represents a gentler approach, geared toward steering conversations in less toxic directions. Twitter says that in Canada, where it tested the feature initially, 27 percent of people said they “would reconsider how they interact with others in the future” after they received notice that their replies had been hidden by the original tweeter. “One small step for Twitter, but one big step for Twitterkind,” wrote Dantley Davis, Twitter’s VP of design and research, when the company expanded the test in September. “Trolls beware!”
Davis has bigger ideas for cleaning up the Twitterverse—though, for now, the company hasn’t committed to building them. He has suggested that features letting users remove themselves from a conversation entirely, or disable @ mentions from people without permission, could spare someone from having to deal with vicious tweets in the first place. “Having been on the receiving end of harassment, I believe both of these features could have been helpful in preventing attacks from scaling up,” says Rossini.
Each of those ideas centers on self-moderation, giving Twitter users more control over their individual experience on the platform (or, less charitably, foisting the responsibility of decency onto users). That laissez-faire attitude fits with Twitter’s self-image as the free-speech platform, but it does little to address the larger problems of a platform where conspiracy theories, hate speech, harassment, and disinformation thrive.
But even introducing small forms of moderation and control should be taken as a step in the right direction, says Dhiraj Murthy, a sociologist at the University of Texas at Austin and the author of Twitter: Social Communication in the Twitter Age. Murthy’s research has followed the way far-right groups target minority groups on Twitter, often by spamming their tweets with racist replies. “Allowing hide and mute functions could provide these types of users a level of content moderation control in terms of how their profile is being presented to the world,” he says. “Of course, this does not abrogate Twitter and other social media platforms from actively looking for content that needs to be moderated,” but it does give some power to users who might otherwise feel helpless.