To reduce the hateful comments that cause discomfort in its communities, Twitch has released Shield Mode, a tool that helps curb harassment by blocking forbidden words.
This security tool joins the others the streaming platform already offers, but it is more complete because it combines several existing options and lets them be configured in a personalized way.
In this way, users themselves organize the configuration to their liking so that this type of message is blocked or flagged in their stream's chat; moderators also receive powers to help enforce these settings.
How Twitch Shield Mode works
Several of the functions in this new tool are already familiar to users of the platform, but that is exactly the point of the new option: to bring all the security features together and present them in a much simpler configuration format.
The option therefore includes features such as channel modes that restrict chat to followers or subscribers only, verification requirements via phone or email, and AutoMod levels.
Shield Mode can be opened from the channel page, the Stream Manager or Mod View, where all the functions for customizing content moderation to each content creator's criteria appear.
In addition, chat commands such as '/shield' and '/shieldoff' activate and deactivate the saved configuration, and moderators can use them as well.
When the shield is enabled, certain words or expressions can be restricted, so any message that uses them is blocked. This is configured under the 'Terms and Phrases' option.
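As an illustration of how this kind of term filtering works in general (this is not Twitch's actual implementation, and the term list and function names below are hypothetical), a blocked-terms check can be sketched in a few lines of Python:

```python
import re

# Hypothetical list of blocked terms a streamer might configure.
BLOCKED_TERMS = ["badword", "another bad phrase"]

# Pre-compile one case-insensitive pattern per term, matching whole
# words so that longer words merely containing a term are not blocked.
_PATTERNS = [re.compile(r"\b" + re.escape(t) + r"\b", re.IGNORECASE)
             for t in BLOCKED_TERMS]

def is_blocked(message: str) -> bool:
    """Return True if the chat message contains any blocked term."""
    return any(p.search(message) for p in _PATTERNS)

print(is_blocked("hello everyone"))     # False
print(is_blocked("you are a BADWORD"))  # True
```

A real moderation filter would be more elaborate (handling misspellings, substituted characters and so on), but the principle is the same: each incoming message is checked against the streamer's configured list before it reaches the chat.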
Additionally, content creators will have access to a list of blocked profiles, which they can remove from the list, keep blocked, or report directly to Twitch.
All this lets creators build a community tailored to their type of stream and prevent hateful comments, especially when covering sensitive or controversial topics involving divisions over politics, religion, nationality, race or sexual orientation.
Twitch safety for minors
The streaming platform announced through its website that, for safety reasons, it is beginning the process of identifying accounts belonging to users under 13 years of age.
Although Twitch is attempting to remove these users, that does not mean neglecting those who go undetected. In its official announcement, the streaming service said it will update its moderation policies to prioritize all reports identified as coming from minors' accounts.
An option to block messages from strangers was also enabled for users in general. The platform likewise blocked searches for "certain terms", although it did not specify which terms these are.
In addition, Twitch said it has deepened its collaboration with institutions that report user behavior to the platform, including practices related to the harassment of minors in other spaces, so that it can better monitor users who show signs of harassing behavior.