So, you’ve set up your community and established some rules to serve as a starting point. The next step to creating the community you want is enforcing those rules. Automated moderation can play a large role in the process of rule enforcement and keeping your community safe, even when there aren’t always human eyes to do it for you.
This article will cover the implementation and configuration of auto moderation, both with the aid of tools Discord has readily available and with tools provided by third-party bots. Before reading on, here are some commonly used terms and definitions to help fully grasp this article:
‘Raid’ ‘Raider’ - A raid is when a large number of users join a community with the express intention of causing issues for that community. A raider is an account engaging in this activity.
‘Alt’ ‘Alt account’ - An alt is a throwaway account owned by a Discord user. In the context of raids, these alts are made en masse to engage in raiding.
‘Self-bot’ - A self-bot is an account that’s being controlled via custom code or tools. This is against Discord’s Terms of Service. In the context of raids and moderation, these accounts are automated to spam, bypass filters or engage in other disruptive activities.
Why is Auto Moderation important?
Auto moderation is integral to many communities on Discord, especially larger servers. The security that auto moderation can provide gives users a much better experience, makes moderation of servers easier, and can prevent malicious users from joining or doing damage to your community.
Auto Moderation vs Manual Moderation
If you’re a well-established community, you likely have a moderation team in place. You may wonder, “Why should I use auto moderation? I already have moderators!” Auto moderation isn’t a replacement for manual moderation; rather, it serves to enrich it. Your moderation team can continue to make informed decisions within your community while auto moderation makes that process easier for them by responding to common issues in real time, faster than a human moderator can.
Knowing what’s right for your community
Different communities will warrant varying levels of auto-moderation. It’s important to be able to classify your community and consider what level of auto-moderation is most suitable to your community’s needs. Auto moderation rules sit on top of Discord’s Community Guidelines, which are the fundamental rules of the platform.
Below are different kinds of communities and their recommended auto moderation systems:
Smaller communities
If you run a Discord community with limited invites and every new member is known, auto-moderation will only be critical if you have a larger member count. For smaller servers, it’s recommended to have at least some auto-moderation - namely text filters, anti-spam, or Discord’s auto-moderation tool, AutoMod, which applies keyword filters.
Larger communities
If you run a Discord community that is discoverable or has public invites, it’s strongly recommended to have both anti-spam and text filters or have AutoMod keyword filters in place. Additionally, you should be implementing some level of member verification to facilitate the server onboarding process. If your community is large, with several thousand members, anti-raid functionality may become necessary. Remember, auto-moderation is configurable to your rules, so keep this principle in mind when deciding what level of automation works best for your community.
Verified and Partnered communities
If your Discord community is Verified or Partnered, you will need to adhere to additional guidelines to maintain that status. Auto moderation is recommended for these communities so you can feel confident that these guidelines are being enforced consistently and effectively at all times. Consider using anti-spam and text filters or AutoMod keyword filters. If you have a vanity URL or your community is discoverable, anti-raid protection is critical in order to protect your community from malicious actors.
Built-in moderation features
Some of the most powerful tools in auto-moderation come with your community and are built directly into Discord. You will find the moderation settings under the Server Settings tab. These settings can help secure your community without the elaborate setup of a third-party bot. The individual settings are detailed below.
AutoMod
AutoMod is Discord’s automatic content moderation feature that allows those with the “Manage Server” and “Administrator” permissions to set up keyword and spam filters. These filters can automatically trigger moderation actions such as blocking messages that contain specific keywords, blocking spam from being posted, and logging flagged messages as alerts for you to review.
This feature has a wide variety of uses within the realm of auto-moderation, allowing moderators to automatically log malicious messages and protect community members from harm, spam, and words like slurs or severe profanity. AutoMod’s abilities also extend to messages within threads, text-in-voice channels, and forum channels. AutoMod can provide peace of mind for moderators.
Setting up AutoMod is simple. First, make sure your server has the Community feature enabled. Then, navigate to your server’s settings and open the AutoMod tab, where you can start setting up keyword and spam filters.
Keyword Filters
Keyword filters allow you to flag and block messages containing specific words, characters, and symbols from being posted. You can set up one “Commonly Flagged Words” filter, along with up to three custom keyword filters that allow you to enter a maximum of 1,000 keywords each, for a total of four keyword filters.
When inserting keywords, you should separate each word with a comma, like so: Bad, words, go, here. Matches for keywords are exact and aware of whitespace. For example, the keyword “Test Filter” will be triggered by “test filter” but not “testfilter” or “test”. Do note that keyword matching ignores capitalization.
To have AutoMod filter messages containing words that partially match your keywords, which is helpful for preventing users from circumventing your filters, you can modify your keywords with the asterisk (*) wildcard character. This works as follows:
*cat - flags “bobcat” or “copycat”.
cat* - flags “catching” or “caterpillar”.
*cat* - flags “scathing” or “locate”.
Be careful with wildcards so as to not have AutoMod incorrectly flag words that are acceptable and commonly used!
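If you’re curious how matching like this works under the hood, here’s a minimal Python sketch of one way wildcard keywords could be evaluated. The translation rules mirror the examples above, but this is only an illustration, not Discord’s actual implementation:

```python
import re

def keyword_to_regex(keyword: str) -> re.Pattern:
    """Translate an AutoMod-style keyword into a regular expression.

    A leading/trailing * permits extra characters on that side; without
    one, the keyword must start/end on a word boundary.
    """
    core = re.escape(keyword.strip("*"))
    prefix = "" if keyword.startswith("*") else r"\b"
    suffix = "" if keyword.endswith("*") else r"\b"
    return re.compile(prefix + core + suffix, re.IGNORECASE)

filters = [keyword_to_regex(k) for k in ("*cat", "Test Filter")]

def is_flagged(message: str) -> bool:
    return any(f.search(message) for f in filters)

print(is_flagged("look at that bobcat"))  # True: "*cat" matches "bobcat"
print(is_flagged("TEST FILTER"))          # True: capitalization is ignored
print(is_flagged("testfilter"))           # False: whitespace must match exactly
```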
Commonly Flagged Words
AutoMod’s Commonly Flagged Words keyword filter comes equipped with three predefined wordlists that provide communities with convenient protection against commonly flagged words: Insults and Slurs, Sexual Content, and Severe Profanity. These wordlists all share one rule, meaning they’ll all have the same response configured. The lists are maintained by Discord and can help keep conversations in your community consistent with Discord's Community Guidelines.
Exemptions
Both AutoMod’s Commonly Flagged Words filter and custom filters allow for exemptions in the form of roles and channels. Within the Commonly Flagged Words filter, you can also exempt specific words from Discord’s predefined wordlists. Messages sent by anyone with an exempt role, sent in an exempt channel, or containing only exempted keywords will not trigger responses from AutoMod.
This is notably useful for allowing moderators to bypass filters, allowing trusted users to send less restricted messages, and tailoring the commonly flagged wordlists to your community’s needs. As an example, you could prevent new users from sending Discord invites with a keyword filter of *discord.gg/* and *discord.com/invite/*, then exempt moderators or users with a certain role so they can still share invites. The same approach could be used to allow Discord invites only in a specific channel.
Note: Users with the Manage Server and Administrator permissions will always be exempt from all AutoMod filters. Bots and webhooks are also exempt.
Spam Filters
Spam, by definition, consists of irrelevant or unsolicited messages. AutoMod comes equipped with two spam filters that allow you to flag messages containing mention spam and content spam.
Mention Spam
Mention spam is when users post messages containing excessive mentions for the purpose of disrupting your server and unnecessarily pinging others.
AutoMod’s mention spam filter lets you flag and block messages containing an excessive number of unique @role and @user mentions. You define what is “excessive” by setting a limit on the number of unique mentions that a message may contain, up to 50.
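To make “unique mentions” concrete: in raw message content, user mentions look like <@123> or <@!123> and role mentions like <@&456>. Here’s a rough Python sketch of the counting logic, purely for illustration:

```python
import re

# Raw Discord mentions: <@id> or <@!id> for users, <@&id> for roles.
MENTION_RE = re.compile(r"<@[!&]?\d+>")
MENTION_LIMIT = 10  # your chosen threshold; AutoMod supports up to 50

def exceeds_mention_limit(raw_content: str) -> bool:
    mentions = MENTION_RE.findall(raw_content)
    # Normalize the nickname form <@!id> to <@id> so repeat mentions of
    # the same user count once; a set then keeps only unique mentions.
    unique = {m.replace("<@!", "<@") for m in mentions}
    return len(unique) > MENTION_LIMIT
```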
It is recommended to select "Block message" as the AutoMod response to messages containing excessive mentions, as this prevents notifications from being sent out to the tagged users and roles. This keeps your channels from being clogged up by disruptive messages during mass mention attempts and mention raids, and saves your members from the annoyance of unnecessary notifications and ghost pings.
Spam Content
This filter flags spammy text content that has been widely reported by other users as spam, such as unsolicited messages, free Nitro scams and advertisements, and invite spam.
This filter identifies spam at large by using a model that has been trained by messages that users have reported as spam to Discord. Enabling this filter is an effective way to block out a variety of messages that resemble spammy content reported by users and to identify spammers in your community that should be weeded out. However, this filter isn’t perfect and might not catch all forms of spam, such as DM spam, copy/paste, or repeat messages.
Automatic Responses
You can configure AutoMod’s keyword and spam filters with the following automatic responses when a message is flagged:
Block message
This response will prevent a message containing a keyword or spam from being sent entirely. When this happens, the user is notified with an ephemeral message informing them that the community has blocked their message from being sent.
Discord’s filters will seamlessly block all messages containing matching keywords, spam content, or excessive mentions, regardless of the volume of messages, making this response especially effective for preventing or de-escalating raids.
Send an alert
This response will send an alert containing who-what-where information of a flagged message to a logging channel of your choice.
This message will preview what the full caught message would’ve looked like, including the full content. It also shows a pair of buttons at the bottom of the message, ⛨ Actions and Report Issues. These action buttons will bring up a user context menu, allowing you to use any permissions you have to kick, ban, or time out the member. The message also displays the channel where the message was attempted and the filter it triggered. In the future, some auto-moderation bots may be able to detect these messages and action users accordingly.
Time out user
This response will automatically apply a timeout penalty to a user, preventing them from interacting in the server for the duration of the penalty. Affected users are unable to send messages, react to messages, or join voice channels or video calls during their timeout period. Keep in mind that they are still able to see messages being sent during this period.
To remove a timeout penalty, Moderators and Admins can right-click on any offending user’s name to bring up their Profile Context Menu and select “Remove Timeout.”
Recommended Configuration
AutoMod is a very powerful tool that you can set up easily to reduce moderation work and help keep your community's channels and conversations positive 24/7. For example, you may want to use three keyword filters: one to just block messages, one to just send alerts for messages, and one to do both.
Overall, it's recommended to have AutoMod block messages you wouldn't want community members to see, such as slurs and other extreme language. Enabling the “send alerts” response will allow your moderation team to take action against undesirable messages and the users behind them while shielding the rest of your community from exposure. On the other hand, you may choose to have messages containing other keywords or commonly spammed phrases blocked by AutoMod without setting up alerts, preventing undesirable messages from being sent while managing the number of alerts sent to your logs.
You can configure AutoMod’s keyword and spam filters in real-time to prevent and de-escalate raids by adding spammed keywords or adjusting your mention limit in the event of a mention raid.
It's also recommended to have AutoMod send you alerts for more subjective content that requires a closer look from your moderation team, rather than having it blocked entirely. This will allow your moderation team to investigate flagged messages with additional context to ensure there’s nothing malicious going on. This is useful for keywords that are commonly misrepresented or sent in a non-malicious context.
Verification Level
Your community’s verification level sets requirements that new members must meet before they can send messages in your community. The available levels are:
None - This turns off verification for your community, meaning anyone can join and immediately interact with your community. This is typically not recommended for public communities as anyone with malicious intent can immediately join and be disruptive.
Low - This requires people joining your community to have a verified email, which can help protect your community from bad actors, while keeping everything simple for well-meaning users. This may be an appropriate setting for a smaller community.
Medium - This requires the user to have a verified email address and for their account to be at least 5 minutes old. This further protects your community by introducing a blocker for people creating accounts solely to be disruptive. This may be an appropriate setting for a moderately sized community.
High - This includes the same protections as both the medium and low verification levels, but also adds a 10-minute barrier between someone joining your community and being able to interact. This can give you and anyone else responsible for keeping things clean in your community time to respond to “raids,” or large numbers of malicious users joining at once. For legitimate users, you can encourage them to do something with this 10-minute period, such as read the rules and familiarize themselves with informational channels to pass the time until the waiting period is over. You may want to consider this setting for large communities.
Highest - This requires a joining user to have a verified phone number in addition to the above requirements. This setting can be bypassed by determined “raiders,” but it takes additional effort. This would be a good setting for a smaller community where security is paramount or a larger community with custom verification. This is a requirement many normal Discord users won’t satisfy, whether by choice or inability. It’s worth noting that Discord’s phone verification does not accept VoIP numbers, preventing them from being abused for this purpose.
Explicit media content filter
Not everyone on the internet is sharing content with the best intentions in mind. Discord provides a robust system that scans images and embeds to help prevent inappropriate images from being posted in your community. The explicit media content filter has varying levels of scrutiny:
Don’t scan any media content - Nothing sent in your community will go through Discord’s automatic image filter. You may want to consider this setting if you have a smaller community where only people you trust can post images, videos, etc.
Scan media content from users without a role - This will help prevent new users from filling your community with unsavory imagery. This may be an appropriate setting for a moderately sized community.
Scan media content from all members - This setting helps prevent everyone, regardless of their role, from posting unsavory images. In general, we recommend this setting for ALL larger communities.
Once you’ve decided on the base level of auto-moderation you want for your community, it’s time to look at the extra levels of auto-moderation bots offer. The next few sections are going to detail the ways in which a bot can moderate.
Bot-controlled Auto Moderation
Bots can help prevent the posting of messages containing certain words, phrases, spam, and mentions.
When choosing a bot for auto moderation, you should consider its capabilities for manual moderation (things like managing mutes, alerts, etc.). Find a bot with an infraction/punishment system you and the rest of your moderator team find to be the most appropriate. All of the bots listed in this article have a manual moderation system.
The main and most pivotal forms of auto-moderation are:
Anti-Spam
Text Filters
Anti-Raid
User Filters
Each of these subsets of auto-moderation will be detailed below along with recommended configurations depending on your community.
It’s important that your auto-moderation bot(s) of choice adopt the cutting edge of Discord API features, as this allows them to provide better capabilities and integrate more powerfully with Discord. Slash commands are especially important, as you’re able to configure which roles and channels can use each slash command on a case-by-case basis. This lets you maintain very detailed moderation permissions for your moderation team. Bots that support more recent API features are generally also more actively developed, and thus more reliable in reacting to new threat vectors and adapting to new features on Discord.
Slash Command Permissions
As mentioned above, one of the more recent features is Slash Commands. Slash commands are configurable per-command, per-role, and per-channel. This allows you to restrict moderation commands solely to your moderation team without relying on the bot’s own permission checking to work perfectly. This is relevant because there have been documented cases of a moderation bot’s permission checks being bypassed, allowing normal users to execute moderation commands.
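For bot developers, here’s a minimal sketch of how a bot built with discord.py might ship a moderation-only slash command. The /warn command and its handler are hypothetical; server admins can still remap who may use it under Server Settings > Integrations:

```python
import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)

# By default, this command is hidden from members without the Moderate
# Members permission; admins can override that per-role and per-channel
# in Server Settings -> Integrations.
@tree.command(name="warn", description="Warn a member (hypothetical example)")
@app_commands.default_permissions(moderate_members=True)
async def warn(interaction: discord.Interaction, member: discord.Member, reason: str):
    # Hypothetical handler: store the infraction however your bot tracks them.
    await interaction.response.send_message(
        f"Warned {member.display_name}: {reason}", ephemeral=True
    )

# After logging in (client.run(TOKEN)), call tree.sync() once to register the command.
```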
Anti-Spam
One of the most common forms of auto moderation is anti-spam, a type of filter that can detect and prevent various kinds of spam. Depending on what bot(s) you’re using, this comes with various levels of configurability.
[Table: comparison of anti-spam filters across the bots covered in this article. Footnotes: *Unconfigurable filters; these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance. **Giselle combines these elements into one filter.]
Anti-spam is integral to running a larger community. There are multiple types of spam a user can send, with some of the most common forms listed in the table above. These types of spam messages are also very typical of raids, especially Fast Messages and Repeated Text. While spam can largely be defined as irrelevant or unsolicited messages, the nature of spam can vary greatly. However, the vast majority of instances involve a user or users sending lots of messages with the same content with the intent of disrupting your community.
There are subsets of this spam that many anti-spam filters will be able to catch. For example, if Mentions, Links, Invites, Emoji, or Newline Text are spammed in one message or repeatedly across several messages, they will trigger most Repeated Text and Fast Messages filters. Subset filters are still a good thing for your anti-spam filter to have, as you may wish to punish more or less harshly depending on the type of spam. Notably, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, ten links in five seconds, they will be punished to some degree. This could be ten links in one message, or one link in ten messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
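To make the “X in Y” pattern concrete, here is an illustrative Python sketch of a sliding-window check. The thresholds and names are invented for the example and not taken from any particular bot:

```python
import time
from collections import defaultdict, deque

LINK_LIMIT = 10      # X: how many links...
WINDOW_SECONDS = 5   # ...within Y seconds

# Timestamps of recently seen links, tracked per user.
recent_links: dict[int, deque] = defaultdict(deque)

def register_links(user_id: int, links_in_message: int) -> bool:
    """Record a message's links; return True if the user crossed the limit."""
    now = time.monotonic()
    window = recent_links[user_id]
    window.extend([now] * links_in_message)
    # Forget anything older than the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    # Fires on ten links in one message or one link in each of ten rapid
    # messages alike, mirroring the behavior described above.
    return len(window) >= LINK_LIMIT
```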
Sometimes, spam may happen too quickly for a bot to keep up. Discord imposes rate limits to stop bots from harming communities, and these limits can prevent a bot from deleting individual messages fast enough when those messages are sent in rapid succession. This often happens in raids. As such, Fast Messages filters should prevent offenders from sending further messages; this can be done via a mute, kick, or ban. If you want to protect your community from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to post in your community. Different bots will provide various ways to filter these things.
[Table: comparison of text filters across the bots covered in this article. Footnotes: *Defaults to banning ALL links. **Users can bulk-input a YML config. ***Only the templates may be used; custom filters cannot be made.]
A text filter is a must for a well-moderated community. It’s strongly recommended you use a bot that can filter text based on a banlist. A banned words filter can catch links and invites, provided http:// and https:// are added to the banlist (to block all links) or specific full-site URLs are added (to block individual websites). In addition, discord.gg can be added to a banlist to block ALL Discord invites.
A banned words filter is integral to running a public community, especially for Partnered, Community, or Verified servers, which have additional content guidelines they must meet.
Before configuring a filter, it’s a good idea to work out what is and is not okay to say in your community, regardless of context. For example, racial slurs are generally unacceptable in almost all communities, regardless of context. Banned word filters often won’t account for context when working from an explicit banlist, so it’s also important that a robust filter offers allowlisting options. For example, if you add ‘cat’ to your filter and someone says “catch,” they could get in trouble for using an otherwise acceptable word.
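To illustrate why allowlisting matters, here’s a small Python sketch of a banlist check. The lists and tokenization are illustrative only; real filters are considerably more sophisticated:

```python
import re

BANLIST = {"cat", "discord.gg"}     # illustrative entries only
ALLOWLIST = {"catch", "category"}   # fine words that contain a banned term

def violates_banlist(message: str) -> bool:
    # Substring matching catches evasion ("bobcat", "discord.gg/xyz")...
    for word in re.findall(r"[\w./:-]+", message.lower()):
        if word in ALLOWLIST:
            continue  # ...but the allowlist rescues legitimate words.
        if any(banned in word for banned in BANLIST):
            return True
    return False

print(violates_banlist("nice catch!"))          # False: allowlisted
print(violates_banlist("my cat is cute"))       # True: banned word
print(violates_banlist("join discord.gg/xyz"))  # True: invite link substring
```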
Filter immunity may also be important to your community, as there may be individuals, such as members of the moderation team, who need to discuss the use of banned words. There may also be channels that allow the usage of otherwise banned words. For example, you may decide to allow a serious channel dedicated to discussion of real world issues to discuss slurs or other demeaning language; in this case, channel-based immunity to an otherwise banned word is integral to allowing those conversations.
Link filtering is important to communities where sharing links in “general” chats isn’t allowed, or where there are specific channels dedicated to sharing that content. This can allow a community to remove links with an appropriate reprimand.
Allow/ban-listing and templates for links are also a good idea to have. While many communities will use catch-all filters to make sure links stay in specific channels, some links will always be inherently problematic. Being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without requiring intricate setup on your behalf. However, it is recommended you configure a custom filter as a supplement to help ensure that specific slurs, words, etc. that break the rules of your community aren’t being posted.
Invite filtering is equally important in large or public communities, where users will attempt to raid, scam, or otherwise assault your community with links intended to manipulate your user base, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized instantly and dealt with more harshly. Some bots also allow allow/banlisting by community, letting you control which communities’ invites may be shared and which may not. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked communities are shared. These communities should be added to an invite allowlist to prevent their deletion.
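As a sketch, an invite filter with a per-community allowlist might look like the following; the invite URL shapes are real, but the allowlist contents and function name are placeholders:

```python
import re

# Matches discord.gg/<code> and discord.com/invite/<code> links.
INVITE_RE = re.compile(r"(?:discord\.gg|discord\.com/invite)/([\w-]+)", re.IGNORECASE)

# Invite codes of partnered/approved communities (placeholder values).
APPROVED_INVITES = {"partner-code-1", "partner-code-2"}

def find_unapproved_invites(message: str) -> list[str]:
    """Return invite codes in the message that aren't on the allowlist."""
    return [code for code in INVITE_RE.findall(message)
            if code not in APPROVED_INVITES]
```

In practice, bots usually resolve each invite through Discord’s API and allowlist by destination server rather than by code, since one community can have many invite links.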
Built-in suspicious link and file detection
Discord implements a native filter on links and files, though this filter is entirely client-side and doesn’t prevent malicious links or files from being sent. It does, however, warn users who attempt to click suspicious links or download suspicious files, and it prevents known malicious links from being clicked at all. While this doesn’t remove offending content and shouldn’t be relied on as auto-moderation, it does help prevent harm to your members.
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often self-bots) with the intent of damaging your community. Protecting your community from these raids can come in various forms.
[Table: comparison of anti-raid features across the bots covered in this article. Footnote: *Unconfigurable; triggers raid prevention based on user joins and damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.]
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to stop the detected raid from being effective, since raiding users will typically spam channels with unwanted messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers, depending on how elaborate the system is.
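The heuristics described here are simple to express in code. Below is an illustrative Python sketch; the thresholds and weights are invented for the example:

```python
from datetime import datetime, timedelta, timezone

def raid_suspicion_score(created_at: datetime, has_avatar: bool) -> int:
    """Crude heuristic: a higher score means more likely a raid account."""
    score = 0
    account_age = datetime.now(timezone.utc) - created_at
    if account_age < timedelta(days=1):
        score += 2   # brand-new account
    elif account_age < timedelta(days=7):
        score += 1   # very young account
    if not has_avatar:
        score += 1   # still using the default profile picture
    return score

# During a join spike, a bot might treat joins scoring >= 2 as raid-suspect.
```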
Raid prevention stops a raid from succeeding, acting on either Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your community by preventing raiding users from accessing your community in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your community by closing off certain aspects of it either from all new users or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users in the community.
Raid anti-spam is an anti-spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to “Purge” or “Prune”.
Built-in anti-raid
It should be noted that Discord features built-in raid and self-bot detection, which helps prevent raids as, or before, they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against accounts if the time difference between the join and leave times is extremely small (such as between 0-5 seconds). However, we do not recommend relying solely on these systems if you run a large community.
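If your logging bot records join and leave timestamps, that inference is a one-line check; the 0-5 second threshold in this Python sketch comes straight from the guidance above:

```python
from datetime import datetime

def likely_removed_by_discord(joined_at: datetime, left_at: datetime) -> bool:
    # A near-instant leave right after joining suggests Discord's built-in
    # detection acted on the account, rather than the user leaving on their own.
    return (left_at - joined_at).total_seconds() <= 5
```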
User filters
Messages aren’t the only way bad actors can introduce unwanted content to your community. They can also manipulate their Discord username or display name to be offensive or violate rules. Different bots offer different filters to prevent this.
When choosing which bot(s) to use for your auto moderation needs, consider making username filtering a lower-priority criterion, as users with malicious usernames can simply be nicknamed to hide their actual username if necessary.
Specialized Auto Moderation Bots
There are some specialized bots that only cover one specific facet of auto moderation and execute it especially well. These include:
Beemo - Bot raid detection and prevention
This bot detects raids as they happen globally, banning raiders from your community. This is especially notable because it will ban detected raiders from raids in other communities where the bot is present as those actors join your community, making it significantly more effective than anti-raid solutions that only pay attention to your community.
Fish - Malicious link and DM raider detection
Fish is designed to counter scamming links and accounts by targeting patterns in joining users to prevent DM raids (like normal raids, but members are directly messaged instead). These DM raids are typically phishing scams, which Fish also filters by deleting known phishing sites.
Safelink and Crosslink - Link auto-moderation
Both of these bots are highly specialized link and file moderation bots that effectively filter adult sites, scamming sites, and other categories of sites as defined by your moderation team.
Which bot do I use?
When choosing a bot for auto-moderation you should ensure it has an infraction/punishment system you and your moderation team are comfortable with and that its features are what’s best suited for your community. Consider testing out several bots and their compatibility with Discord’s built-in auto-moderation features to find what works best for your server’s needs. You should also keep in mind that the list of bots in this article is not comprehensive - you can consider bots not listed here. The world of Discord moderation bots is vast and fascinating, and we encourage you to do your own research!
For super-large communities (>100,000)
For the largest of communities, it’s recommended you employ everything Discord has to offer. You should use the High or Highest Verification level, all of Discord’s AutoMod keyword filters, and a robust moderation bot like Gearbot or Gaius. You should seriously consider additional bots like Fish, Beemo, and Safelink/Crosslink to aid in keeping your users safe, and maintain detailed content moderation filters. At this scale, you should also seriously consider premium, self-hosted, or custom moderation bots to meet the unique demands of your community.
For large communities (>10,000)
It’s recommended you use a bot with a robust and diverse toolset, while simultaneously utilizing AutoMod’s commonly flagged word filters. You should use the High Verification level to aid in preventing raids. If raiding isn’t a large concern for your community, Gearbot and Giselle are viable options. Your largest concerns in a community of this size are likely to be preventing spam and inappropriate content; as a result, robust keyword filters are also highly recommended, and user filters are a good bonus. Beemo can be a good match for servers of this size. At this scale, a self-hosted, custom, or premium bot may also be a viable option, but such bots aren’t covered in this article.
For midsized communities (>1,000)
It’s recommended you use Fire, Gearbot, Bulbbot, AutoModerator or Giselle. Mee6 and Dyno are also viable options; however, they’re very large bots and have been known to experience outages, which could leave your community unprotected at times. At this community size, you may not be very concerned about raids; as a result, finding a bot with anti-spam and text filters may be sufficient. You may find that AutoMod’s keyword filters and commonly flagged words lists provided by Discord are adequate for your needs. User filters, at this community size, may not be needed, and you may find that a Verification Level of Medium is sufficient.
For smaller communities
If your community is small, the likelihood of malicious users joining to wreak havoc is low. As such, you can choose a bot with the general moderation features you like most and use that for auto-moderation. Any of the bots listed in this article should serve this purpose. At this scale, AutoMod’s keyword filters are likely to be adequate. Your Verification Level is largely up to you, depending on where you anticipate member growth coming from, with Medium as the default recommendation.
Configuring Auto Moderation for listed bots
Mee6
First, make sure Mee6 is in the community you wish to configure it for. Then log into its online dashboard (https://mee6.xyz/dashboard/), navigate to your community, open the Plugins tab, and enable the ‘Moderator’ plugin. Within the settings of this plugin are all the auto-moderation options.
Dyno
First, make sure Dyno is in the community you wish to configure it for. Then log into its online dashboard (https://dyno.gg/account), navigate to the community, then the ‘Modules’ tab. Within this tab, navigate to “Automod,” where you will find all the auto-moderation options.
Giselle
First, make sure Giselle is in the community you wish to configure it for. Then, look at its documentation (https://docs.gisellebot.com/) for full details on how to configure auto-moderation for your community.
Gaius
First, make sure Gaius is in the community you wish to configure it for. Then, look at its documentation (https://docs.gaiusbot.me/books/gaius/chapter/auto-moderation) for full details on how to configure auto-moderation for your community.
Fire
First, make sure Fire is in the community you wish to configure it for. Then, look at its documentation (https://getfire.bot/commands) for full details on how to configure auto-moderation for your community.
Bulbbot
First, make sure Bulbbot is in the community you wish to configure it for. Then, look at its documentation (https://docs.bulbbot.rocks/getting-started/) for full details on how to configure auto-moderation for your community.
Gearbot
First, make sure Gearbot is in the community you wish to configure it for. Then, look at its documentation (https://gearbot.rocks/docs) for full details on how to configure auto-moderation for your community.