How the ASKfm moderation team protected users in 2021: a big community security report

Safe and healthy communication between users on the Internet is the task and goal of every community moderator and every modern social network. The ASKfm team knows this like no other and does not overlook a single aspect of it.

Today's user will no longer waste time and resources on a service with no moderation, no content censoring, and no rules. Security, control, and compliance are the factors that shape user behavior and the size of a service's audience.

With that in mind, the ASKfm team has prepared a report on its user-safety work in 2021.

So, in 2021 we expanded the team's capacity, rechecked an additional 13 million old photos, and added a new group of blacklisted web links to block pirate websites. At the end of 2021 we also started rechecking old text questions created by our audience. New functionality that will display this rechecked content is launching soon. Stay tuned!
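
To illustrate the idea behind a link blacklist, here is a minimal sketch in Python. The domain list, function name, and matching rule are illustrative assumptions, not ASKfm's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical blacklist of pirate-site domains (illustrative only).
BLACKLISTED_DOMAINS = {"pirate-example.com", "free-movies-example.net"}

def is_blacklisted(url: str) -> bool:
    """Return True if the URL's host is a blacklisted domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLACKLISTED_DOMAINS)

print(is_blacklisted("https://pirate-example.com/movie"))  # True
print(is_blacklisted("https://ask.fm/some_profile"))       # False
```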

How content moderation works

In terms of numbers, in 2021 the moderation team checked 78.52% more profile reports than in 2020, receiving and moderating 661,237 profile reports.
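
As a quick back-of-the-envelope check, those two figures together imply the approximate 2020 baseline; this is an estimate derived from the published numbers, not a separately reported figure.

```python
# If 661,237 reports in 2021 is 78.52% more than in 2020,
# the implied 2020 volume is 661,237 / 1.7852.
reports_2021 = 661_237
growth = 0.7852
reports_2020 = reports_2021 / (1 + growth)
print(round(reports_2020))  # ~370,399 profile reports in 2020
```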

We also made it much easier for users to communicate with us: every user can contact our support team via the contact form or directly by email to get an explanation of why their profile was banned. This year our support team received about 500 requests from banned users, and about 200 of them asked to have their profiles unbanned.

The top ban reasons were bot, hacked, and porn, as you can see in our diagram.

“Also, in case we face any global crisis, like a social or political conflict, we analyze the risks for our audience and add potentially risky hashtags or expressions to the pattern list to monitor our audience's mood and tendencies. It helps us prevent the spread of dangerous, racist, and intolerant content,” explains the ASKfm moderation team.
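
As a rough sketch of how such a pattern list can be applied, the Python example below flags posts that match any listed pattern for human review. The patterns, names, and sample posts are hypothetical, not the team's real list.

```python
import re

# Hypothetical pattern list of risky hashtags and expressions.
RISK_PATTERNS = [
    re.compile(r"#unrest\w*", re.IGNORECASE),        # example risky hashtag
    re.compile(r"\brisky phrase\b", re.IGNORECASE),  # example risky expression
]

def flag_for_review(text: str) -> bool:
    """Flag a post for human review if it matches any risky pattern."""
    return any(p.search(text) for p in RISK_PATTERNS)

posts = ["Nice weather today", "#Unrest2021 join the march"]
print([p for p in posts if flag_for_review(p)])  # ['#Unrest2021 join the march']
```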

How does moderation control content and protect users? In our work, we use hash lists to identify suspicious media content. The hash lists were hit 368,611 times in 2021, which is 61.94% more than in 2020. You can see it in this diagram:

[Diagram: hash-list hits, 2021 vs. 2020]
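
Conceptually, hash-list matching fingerprints each uploaded file and checks the fingerprint against a list of known-bad entries; every match counts as a hit. The sketch below uses plain SHA-256 for simplicity, while production systems typically rely on perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding; all names and entries here are illustrative assumptions.

```python
import hashlib

# Hypothetical hash list of known-bad media fingerprints (illustrative only).
KNOWN_BAD_HASHES = {
    "3f5a9c0e...",  # placeholder entry, not a real fingerprint
}

def media_hash(data: bytes) -> str:
    """Fingerprint uploaded media by hashing its raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(upload: bytes) -> bool:
    """True if the upload matches an entry in the hash list (a hit)."""
    return media_hash(upload) in KNOWN_BAD_HASHES
```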

We also worked through external cooperation. In 2021 our moderation team received 17 requests to take measures against malicious content or to act at the request of a copyright holder. In turn, we reported 58 cases involving top-priority threats to law enforcement, dealing with problems such as CSAM, extremism, self-harm, and others.

We understand that the world is changing, various crises are unfolding, and new risks for people keep appearing. We stay sharp and are always ready to apply new approaches in order to remain the best for our audience amid modern trends.
