Twitter’s redesigned approach for reporting policy violations on its platform is now widely available, the company has revealed. It’s designed to handle everything from misinformation and spam to harassment and hate speech.
The redesign has been in testing since December last year and uses a so-called “symptoms-first reporting flow” designed to make reporting bad behavior easier. It’s now available in “most countries” across the web, iOS, and Android.
The company outlined how the new process works in a blog post last year. Previously, Twitter’s process would ask which policy had been broken and then ask for more details. Instead, the new flow asks for more information on what’s happened before getting more granular about which rules may have been broken.
Twitter’s blog post analogized the new process to a doctor asking, “Where does it hurt?” rather than immediately leaping to specifics.
Early results from the testing have been positive. For example, Twitter says the number of “actionable reports” increased by 50 percent under the new process.
The rollout of the new reporting flow comes as Elon Musk’s attempted takeover of the platform has renewed scrutiny on Twitter’s moderation policies. Musk’s position as a “free speech absolutist” suggests he’d want the company to take a much more relaxed approach to content moderation under his ownership.
But with the Tesla CEO making frequent threats to scrap the deal, it’s still unclear when, or even if, the acquisition may be completed.
Twitter’s objective is to serve the public conversation. Unfortunately, violence, harassment, and other similar conduct discourage people from expressing themselves and ultimately diminish the value of global public conversation. Our rules ensure all people can participate in public conversation freely and safely.
We may sometimes add a notice to an account or Tweet to give you more context on the actions our systems or teams may take. In some instances, this is because the behavior violates the Twitter Rules. Other times, it may be in response to a valid and properly scoped request from an authorized entity in a given country.
You can report instantly from an individual Tweet, List, or profile for specific violations, including spam, abusive or harmful content, inappropriate ads, self-harm, and impersonation.
People use Twitter to show what’s happening worldwide, often sharing images and videos as part of the conversation. Sometimes, this media can depict sensitive topics, including violent and adult content.
We recognize that some people may not want to be exposed to sensitive content, which is why we balance allowing people to share this type of media with helping people who want to avoid it.
For this reason, you can’t include violent, hateful, or adult content within prominent areas on Twitter, including in live video, profile, header, or List banner images. If you share this content on Twitter, you must mark your account as sensitive. Doing so places pictures and videos behind an interstitial (or warning message) that needs to be acknowledged before your media can be viewed.
Using this feature means that people who don’t want to see sensitive media can avoid it or make an informed decision before viewing it. We also restrict specific sensitive media, such as adult content, for viewers who are under 18 or who do not include a birth date on their profile.
Under this policy, there are also some types of sensitive media content that we don’t allow at all because they have the potential to normalize violence and cause distress to those who view them.