Liability of the internet intermediary for hate speech: how far should it go?
Recommendations for companies
Companies have for too long avoided human rights law as a guide to their rules and rule-making, notwithstanding the extensive impacts they have on the human rights of their users and the public. In addition to the principles adopted in earlier reports and in keeping with the Guiding Principles on Business and Human Rights, all companies in the ICT sector should:
(a) Evaluate how their products and services affect the human rights of their users and the public through periodic and publicly available human rights impact assessments;
(b) Adopt content policies that tie their hate speech rules directly to international human rights law, indicating that the rules will be enforced according to those standards, including the relevant United Nations treaties, the interpretations of the treaty bodies and special procedure mandate holders and other experts, and the Rabat Plan of Action;
(c) Define the category of content they consider to be hate speech, provide reasoned explanations for users and the public, and apply those definitions consistently across jurisdictions;
(d) Ensure that any enforcement of hate speech rules involves an evaluation of context and the harm that the content imposes on users and the public, including by ensuring that any use of automation or artificial intelligence tools involves a human in the loop;
(e) Ensure that contextual analysis involves the communities most affected by content identified as hate speech and that those communities help identify the most effective tools to address harms caused on the platforms;
(f) As part of an overall effort to address hate speech, develop tools that promote individual autonomy, security and free expression, and involve de-amplification, de-monetization, education, counter-speech, reporting and training as alternatives, when appropriate, to the banning of accounts and the removal of content.