
2019-11-12

ECMI Minorities Blog

Regulating Hate Speech Online for Minorities: “Regulate First, Ask Questions Later”? 

Author: Dr. Kyriaki Topidi

Nearly a third of the world’s population is active on Facebook alone. At the same time, there is intense debate over how to regulate expressions of hate on social media. Caught between a US regulatory logic, which regards any idea of (online) speech regulation as problematic, and a European one, which is beginning to impose duties of self-regulation on private intermediaries, it is presently unclear how governments should position themselves vis-à-vis online (hateful) speech.

Two essential questions emerge as particularly uncertain: first, what content should be regulated, and second, how decisions regarding regulation of the online public sphere should be made. Beyond these two important legal points of contention, the broader issue of the accountability of online regulators through democratic channels depends largely on national social and political norms regarding hate speech (consider, for example, Germany’s ban on public Holocaust denial).

Against the backdrop of the rapid digitalization of expression and opinion, noticeable gaps in online accountability have surfaced, not only as a concern regarding the enforcement of conflicting views on internet freedom but also as an opening for the polarization of political opinion, the production of fake news, and civil unrest. If one adds to this picture the fact that hate speech published online is disseminated across borders, the complexity of online regulation becomes easy to grasp.

The example of the German NetzDG (Netzwerkdurchsetzungsgesetz of 1 September 2017) is illustrative and widely discussed: by placing the responsibility for deleting illegal content on internet platforms (even though it is individual users who write and post on these platforms), the German law expects private companies to uphold the law in light of their considerable influence over society, in a scheme resembling a right-versus-duty analogy. This prototype appears to be paving the way in Europe for stricter regulation of online expression at the expense of private companies.

In more technical terms, user behaviour and preferences online are targeted by the algorithms tech companies use which, in attempting to maximize user engagement, may in fact end up promoting extreme content. At the same time, the content moderation rules devised by the major tech companies remain opaque and inconsistent in their application: the ProPublica investigation into Facebook’s rules, for example, showed that the platform’s hate-speech rules tend to favour elites and governments over grassroots activists and racial minorities.

But how does this affect minorities? Faced with rising hate speech against Muslim migrants and recent refugees, coupled with a rise in anti-Semitic incidents in countries such as France, Germany and the UK, it becomes less surprising that the European Commission pressured major tech companies to agree to a Code of Conduct pledging to review posts flagged by users. Interestingly, three years after that agreement, in February 2019, the European Commission found that social media platforms were already meeting the task assigned to them in three quarters of cases. In its report, however, the Commission revealed that the most significant grounds for online hatred were xenophobia (17%), anti-gypsyism (12.2%), ethnic origin (6.2%), national origin (5.1%), anti-Semitism (10.1%), anti-Muslim sentiment (13%) and religion (5.2%). Racist and xenophobic hatred against ethnic and cultural minorities thus appears to be pervasive on the web. Such behaviour not only intensifies prejudice and stereotypes but also affects the mental health of the targeted individuals, who already face multiple challenges in their societies. Crucially, online hate speech increases the likelihood of hate crimes while pushing towards the ‘normalization’ of hate through social media.

The ‘chain’ of interaction is now becoming clearer: stereotypes and misrepresentation online are becoming normalized, leading to renewed discrimination on the basis of ethnic and/or cultural traits in conjunction with hate speech, both online and offline. Ultimately, this normalization has been scientifically linked to the commission of hate crimes. Within this ‘pyramid’ of hate, ethnic and national minority groups remain relentlessly affected.

Within such a context, the contribution of internet intermediaries (i.e. the tech companies) to the regulation of the online public sphere should not be underestimated, both for the growth of the diversity of expression online and for the protection of groups particularly targeted by extreme forms of such expression. This is because tech companies act as gateways to information and, just as importantly, as facilitators of the exercise of the right to freedom of expression. Leaving the task of hate speech regulation to private companies, however, jeopardises the legal determination of whether an online statement is illegal. Pushing the argument further, within constitutional states the claim is that it is for courts, rather than private companies, to decide what is lawful and what is not.

States are still responsible for ensuring that the internet remains an open public space, and yet a number of important decisions about what is legal and what is not are de facto taken by private tech companies on a daily basis. The general guiding legal principle applicable in this context stipulates that liability for third-party (i.e. individual user) content cannot be shifted to internet intermediaries, provided the latter remove or disable access to any material that they know to be illegal.

From the outset, this division of ‘tasks’ has proven to be inherently problematic, including from a human rights perspective: the technological methods used to monitor hate speech are imperfect, leading to over-blocking of content and violations of the freedom of expression. The impact of such violations is felt particularly by minority groups and communities, which are affected both as targets of hate speech and as individual users wishing to express their views online on matters of concern to them.

In this practical regulatory arrangement, where the State and private entities share the task of oversight, the challenge of how to legally formalise the responsibilities of private actors becomes key. Legal certainty and foreseeability are two essential features of any legislative intervention, even in contexts where the private sector undertakes public functions. Yet tech companies are profit-maximising entities without accountability which, under threat of penalties from States, are inclined to undertake, on a precautionary basis, unnecessary and often excessive censorship. Algorithms and other automated systems, in this respect, are not autonomous tools capable of comprehensively deciding what content may be objectionable or unlawful in any given set of circumstances. Their side effects, including indirect discrimination, are considerable.

Whether based on public-policy arguments (e.g. security concerns) or on cost-efficiency (e.g. the State delegating to the private sector to minimise costs), it is still unclear to what extent online platforms are actually enforcing their own, often biased, discretionary content policies. This is especially so given the rapidly rising number of takedown requests, which leads major online platforms inevitably to opt for automation rather than human review (Google alone received 2 million such requests each day in 2016).

The 2016 Code of Conduct, agreed at the initiative of the European Commission with four such major platforms to voluntarily enforce EU law-based hate speech policies, may thus be more controversial than it appears. Self-regulation carries the risk of a lack of transparency, accountability and democratic legitimacy. It could also be interpreted as a ‘retaliation’ move against the perception that American tech companies have been trying to impose their “First Amendment” values, under which freedom of speech trumps all other interests. Before celebrating the 75 per cent takedown rate for hate speech notices under the EU Code of Conduct, however, it might be worth checking whether the material taken down was actually illegal.

The very recent Facebook case decided by the CJEU, concerning the removal of hate speech directed through Facebook at the then leader of Austria’s Green Party, echoed the 2017 European Commission Communication on illegal content calling for “fully automated deletion or suspension of content” for all materials constituting “xenophobic and racist speech that publicly incites hatred”. The judgment requires Facebook to undertake tougher controls, as it can be ordered by EU member states to remove defamatory material worldwide. In practice, the CJEU’s finding may allow a court in one Member State to issue an order that could be used to remove posts across the globe, even though what is considered unlawful may vary across national jurisdictions. The decision confirms the growing entanglement between State and non-State entities in combating hateful expressions online.

Prior to this case, in Delfi v. Estonia (2015), the ECtHR had found that the State could impose on a news site the duty to constantly review and remove internet users’ comments, and could thus limit their freedom of expression where comments constituted hate speech, in light of the importance of the public policy interest in regulating hate speech under European law.

In sum, the cooperation of States with a wide range of actors, including private businesses engaged in the provision of online services, does not absolve governments from playing a determining role in guaranteeing a pluralistic online public space. Even if the generation of online content now belongs as much to individuals as it does to social media networks, the available evidence suggests that the needs of members of ethnic and cultural/religious minority groups are not fully taken into account, let alone adequately balanced against other interests, when digital policies on freedom of expression are developed, despite the overwhelming degree to which those groups are affected.

*Please note that the ECMI Minorities Blog replaces the former Issue Brief format.

