Facebook has recently come under attack for failing to enforce its own guidelines on hate speech and violent imagery.
The website is reacting. At the beginning of May, it reversed its earlier decision that a video showing a man being beheaded did not break the social network's policy, and removed the video. After that, pressure from feminist groups, advertisers and the media led it to acknowledge, in a statement by Marne Levine, Facebook's vice president of Global Public Policy, that it is not acting as quickly and effectively as it would like in this matter, but that it will. How? It proposes some measures, but enforcement is difficult with so many millions of users posting constantly.
Facebook’s mission – says Levine – has always been to make the world more open and connected. We seek to provide a platform where people can share and surface content, messages and ideas freely, while still respecting the rights of others. When people can engage in meaningful conversations and exchanges with their friends, family and communities online, amazingly positive things can happen.
To facilitate this goal, we also work hard to make our platform a safe and respectful place for sharing and connection. This requires us to make difficult decisions and balance concerns about free expression and community respect. We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).
Later, Ms Levine admits:
In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want. In other cases, content that should be removed has not been or has been evaluated using outdated criteria. We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better – and we will.
I think that if Facebook has guidelines on hate speech and violent imagery, it is its duty to enforce them, independent of any pressure from social groups or advertisers. And if it cannot enforce them, it should abandon them and say so to its users. I enjoy using Facebook to keep in touch with my extended family and friends in different countries, and I don't expect to find a video of a man being beheaded, or hate speech. The owners of the website have the right to establish guidelines if they want, and users have to respect them; if they don't, their content must be removed. But controlling that, with millions of people posting every day? Almost impossible.
They suggest some new measures to try to control it anyway, though these are more focused on hate speech than on the incident of the violent beheading video. Perhaps because the pressure from social groups and advertisers centered on the first topic? The measures are:
We will complete our review and update the guidelines that our User Operations team uses to evaluate reports of violations of our Community Standards around hate speech. To ensure that these guidelines reflect best practices, we will solicit feedback from legal experts and others, including representatives of the women’s coalition and other groups that have historically faced discrimination.
We will update the training for the teams that review and evaluate reports of hateful speech or harmful content on Facebook. To ensure that our training is robust, we will work with legal experts and others, including members of the women’s coalition to identify resources or highlight areas of particular concern for inclusion in the training.
We will increase the accountability of the creators of content that does not qualify as actionable hate speech but is cruel or insensitive by insisting that the authors stand behind the content they create. A few months ago we began testing a new requirement that the creator of any content containing cruel and insensitive humor include his or her authentic identity for the content to remain on Facebook. As a result, if an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content. We will continue to develop this policy based on the results so far, which indicate that it is helping create a better environment for Facebook users.
We will establish more formal and direct lines of communications with representatives of groups working in this area, including women’s groups, to assure expedited treatment of content they believe violate our standards. We have invited representatives of the women’s group Everyday Sexism to join the less formal communication channels Facebook has previously established with other groups.
We will encourage the Anti-Defamation League’s Anti-Cyberhate working group and other international working groups that we currently work with on these issues to include representatives of the women’s coalition to identify how to balance considerations of free expression, to undertake research on the effect of online hate speech on the online experiences of members of groups that have historically faced discrimination in society, and to evaluate progress on our collective objectives.
As a journalist, I am for freedom of speech, but I don't think that gives anyone the freedom to deliberately spread lies, call for violence or denigrate human dignity. Precisely because of how much I value freedom of speech, I think it is very important to write ethically, seeking the truth honestly and with great respect for human dignity. That usually takes a lot of work; it is difficult and not very sensationalist. It is not the easiest way to get famous, but it is the right thing to do, and it gives you peace of mind.