[Community] How Should Community Managers Handle Potentially Offensive Content?
Content that is potentially, but not unambiguously, offensive falls into a “grey area,” and these cases are especially difficult to handle.
If handled improperly, they have the potential to become a flashpoint that can cause conflict within the community.
In the worst cases, they can hijack the community’s focus away from its true purpose, or introduce so much acrimony that the community is destroyed entirely.
So, what is the best way to handle these grey areas? Answering that question requires first thinking critically about the ramifications of both courses of action.
The Pitfalls Of Being Too Quick To Censor
Let’s start by looking at the pitfalls of regulating any grey areas out of the community as soon as they crop up.
This is the most conservative approach, and one taken by many community managers who believe it is the safest.
However, digging a little deeper suggests that while this might seem intuitive, it is not necessarily true.
For starters, community moderators are human and risk showing bias. Censorship based on perception – in this case, that a symbol “might” be offensive – does not necessarily line up with reality. When faced with a grey area, moderators fall back on their personal views about how much offense something warrants, which is by definition a subjective assessment.
Healthy communities allow free expression within limits, but those limits must be objective or they risk becoming part of the discussion instead of facilitating it. Community managers should take this to heart and err toward letting the community police itself unless there is an obvious breach of conduct.
Proactively allowing moderator subjectivity into a community risks creating needless controversy if members notice it.
Some community members who weren’t offended by the symbol – and would not have been offended by its removal at others’ request – might take offense at the perception that the community manager is injecting her own biases.
The bottom line is that being too quick to censor a grey area risks making the conversation about the community manager and shattering the sense of an open discussion. This should be avoided at all costs.
Why It’s a Mistake To Allow Grey-Area Content To Stay Up
While there are many good reasons for a community manager to let the members police their own community, that decision doesn’t come without its own serious risks attached.
For starters, any organization’s community is ultimately a reflection of the organization itself, and the company will very often be held responsible for what goes on within it. Proactively eliminating grey-area content that might cause offense is a legitimate and important move for a company or group conscious of preventing possible image problems down the road.
Beyond the big-picture branding concerns, there is simply the business ROI of a happy, unified community to preserve.
Just as some members might take offense at the proactive removal of content, others might take offense at the failure to remove it.
The longer grey-area content is left up, the more time there is for someone to see it and turn its presence into a divisive issue that makes the space an angry, argumentative one.
Know Your Community & Know Thyself
Unfortunately, in the end there isn’t any one “correct” way to handle content that falls into a grey area of potential offense. There are both risks and rewards to removing it immediately, and to leaving it up until someone objects.
Only the community manager truly understands the relationship between the community and the organization. By weighing the positives and negatives of each approach and taking these situations case by case, a community manager retains the flexibility to make the right decision for their own community and organization.