Forum Discussion
VasilMichev
MVP · Jun 03, 2024
Spam
Folks, there has been an increase in spam posts recently. Can you please adjust the protection settings and start blocking repeat offenders? Here are some examples from today:
7 Best OST to PS...
Allen
Community Manager
Jun 17, 2024
Thanks VasilMichev, EricStarker, and Deleted for your contributions to this thread.
Spam management is always a difficult area of community management to get right, and we will probably never get it 100% right. The Microsoft Tech Community relies heavily on both automated spam detection and user reporting of inappropriate content.
First, what are the tools available to us to detect, remove and deter spam posting?
Like so many things in the Microsoft Tech Community, there are multiple levels of controls to help us, and this starts right at the beginning with the Microsoft account used to register with the Microsoft Tech Community. Microsoft already has a suite of advanced systems to detect the malicious creation of Microsoft accounts, and use of them that is against our Terms of Use and Code of Conduct. Many spam attacks get stopped at this stage, but clearly not all.
Next up we have post flood rules: users below a certain rank are restricted from posting more than a set number of times within a set number of minutes, with the cooldown increasing logarithmically with the number of posts attempted. The sketch below illustrates the general idea.
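To make that concrete, here is a minimal sketch of a flood rule with a logarithmically growing cooldown. The rank threshold, base cooldown and in-memory storage are all assumptions for illustration, not the platform's actual limits or implementation.

```python
import math
import time
from typing import Dict, Tuple

# Illustrative thresholds only; the real ranks, limits and timings are not public.
BASE_COOLDOWN_SECONDS = 60.0   # cooldown after a first post (assumed)
TRUSTED_RANK = 5               # rank at which the rule stops applying (assumed)

# user_id -> (number of attempted posts, time of last allowed post)
_attempts: Dict[str, Tuple[int, float]] = {}

def can_post(user_id: str, rank: int) -> bool:
    """Return True if the user may post now under the flood rule."""
    if rank >= TRUSTED_RANK:
        return True  # trusted users are not rate limited

    now = time.time()
    count, last_allowed = _attempts.get(user_id, (0, 0.0))

    # Cooldown grows logarithmically with the number of attempted posts.
    cooldown = BASE_COOLDOWN_SECONDS * (1 + math.log1p(count))

    if now - last_allowed < cooldown:
        _attempts[user_id] = (count + 1, last_allowed)  # blocked attempts still count
        return False

    _attempts[user_id] = (count + 1, now)
    return True
```

Each blocked attempt still increments the counter, so a user who keeps hammering the post button sees the wait grow rather than reset.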
Keyword matching allows us to prevent certain words, combinations of words and regular-expression patterns from being used throughout the community, and we can define different lists for a post, a PM or even a user name. Although helpful, this is a bit of a blunt instrument: the serious spammers in particular will change their keywords as quickly as we can add new ones to the list. A rough sketch of this kind of filter follows.
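Here is a small sketch of per-scope keyword and pattern matching. The patterns and scope names are placeholders, not the community's actual blocklist.

```python
import re

# Placeholder patterns; separate lists can be kept for posts, PMs and user names.
BLOCKLISTS = {
    "post": [r"\bOST to PST\b", r"crack(ed)?\s+license"],
    "pm": [r"\bfree followers\b"],
    "username": [r"^promo[_-]?\d+$"],
}

COMPILED = {
    scope: [re.compile(p, re.IGNORECASE) for p in patterns]
    for scope, patterns in BLOCKLISTS.items()
}

def matches_blocklist(text: str, scope: str = "post") -> bool:
    """Return True if the text matches any blocked word or pattern for the scope."""
    return any(p.search(text) for p in COMPILED.get(scope, []))

# Example: matches_blocklist("7 Best OST to PST converters", "post") -> True
```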
The platform has an automated spam quarantine system which analyses posts from multiple communities on the same platform across the world and flags posts that are potentially suspicious. This is largely effective at stopping many spam posts: for the month of May it marked around 10% of all posts as spam, and of those only 7% (roughly 0.7% of the total) were later found not to be spam; the small calculation below shows how those figures combine. Of course, like all machine learning systems, it takes time to detect a new type of spam, and while it learns, some extra spam gets through.
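To make the quoted percentages concrete, here is the arithmetic with a hypothetical month of 100,000 posts (the real volume is not public):

```python
# Quoted May figures applied to an assumed post volume.
total_posts = 100_000
flagged = 0.10 * total_posts        # ~10% of all posts flagged as spam
false_positives = 0.07 * flagged    # ~7% of flagged posts later found legitimate

print(flagged)            # 10000.0 posts quarantined
print(false_positives)    # 700.0 posts, i.e. ~0.7% of all posts
```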
We also encourage members to tell us when they have found a post that is likely harmful, spam or off topic, through the report abuse option. When you report a post via this method, a member of our team will review it and decide what action, if any, is required. Indeed, we will shortly be changing this system to allow anyone to report abuse, even if they are not a member. Obviously there will be safeguards in place to ensure that posts are not maliciously marked as spam.
We also have our moderators, in the form of employees and MVPs, who can mark a post as spam immediately so that it is removed from public access. Our team will then sample some of these posts to ensure fair and appropriate use of the spam controls, in line with our Terms of Use and privacy policies.
Ultimately, we can also ban users against a number of criteria where they have clearly broken our Terms of Use, but this is a last resort and a decision we don't take lightly, usually reserved for clear attempts to spam or deceive users.
Much like Deleted said, we don't talk too much about what each of these rules is, because that would make it easier to circumvent them, and on the whole they do work well. There are times when spam picks up and we either have to do more to catch it, or we have to take extra steps.
Of course, the flip side of this is that if we get it wrong, too many people's posts get marked as spam, and then they complain when it takes a day or so for their post to be reviewed and allowed through. This is the fine line we need to walk to get this right, and in all honesty we will never get it 100% right, 100% of the time.
We are also open to exploring new ideas to tackle spam on the platform.