And it looks like the feature will be available as part of a new Safety Mode, according to a slide from the company’s Analyst Day presentation.
Here’s Twitter’s description of how the feature would work if you turn it on: it automatically blocks accounts that appear to be in violation of the Twitter Rules, and mutes accounts that may be using insults, vulgar language, or hateful remarks.
With the new safety setting turned on, Twitter automatically detects accounts that may be behaving in abusive or unwanted ways and limits how those accounts can interact with your content for seven days, according to the slide.
Twitter has historically struggled with abuse across its platform and has released a number of features over the years to help reduce offensive content, such as letting people hide replies and letting users control who can reply to an individual tweet.
The company also has some automated tools in place to remove abusive tweets; in 2019, it said it removes more than 50 percent of abusive tweets before users report them.
Beyond this feature, the company has published early details about its first-ever paid product, a feature called Super Follow, which appears aimed at combining Discord-style communities, Substack-style newsletters, Clubhouse-style voice chat rooms, and Patreon-style creator support into a single subscription offering.
After last year’s activist shareholder campaign aimed at ousting CEO Jack Dorsey, the company has taken steps on long-awaited products, acquiring companies and looking for ways to leverage its network and attract new streams of revenue.
New revenue streams should undoubtedly be central to Twitter’s ambitious plan to double its revenue by 2023.