EU: Politicizing the Internet by Judith Bergman
https://www.gatestoneinstitute.org/13042/eu-internet-censorship
- Even before such EU-wide legislation, similar ostensible “anti-terror legislation” in France, for example, is being used as a political tool against political opponents and to limit unwanted free speech.
- In France, simply spreading information about ISIS atrocities is now considered “incitement to terrorism”. It is this kind of legislation, it seems, that the European Commission now wishes to impose on all of the European Union.
- Social media giants — Facebook, Twitter, YouTube, Microsoft, Google+ and Instagram — act as voluntary censors on behalf of the European Union.
- The European Commission states that it is specifically interested in funding projects that focus on the “development of technology and innovative web tools preventing and countering illegal hate speech online and supporting data collection”, and studies that analyze “the spread of racist and xenophobic hate speech in different Member States…”
In March, the European Commission — the unelected executive branch of the European Union — told social media companies to remove illegal online terrorist content within an hour — or risk facing EU-wide legislation on the topic. This ultimatum was part of a new set of recommendations that applies to all forms of supposedly “illegal content” online. This content ranges “from terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement.”
While the one-hour ultimatum was ostensibly only about terrorist content, the following is how the European Commission presented the new recommendations at the time:
“… The Commission has taken a number of actions to protect Europeans online – be it from terrorist content, illegal hate speech or fake news… we are continuously looking into ways we can improve our fight against illegal content online. Illegal content means any information which is not in compliance with Union law or the law of a Member State, such as content inciting people to terrorism, racist or xenophobic, illegal hate speech, child sexual exploitation… What is illegal offline is also illegal online”.
“Illegal hate speech” is then broadly defined by the European Commission as “incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin”.
The EU has now decided that these “voluntary efforts” to remove terrorist content within an hour on the part of the social media giants are not enough: that legislation must be introduced. According to the European Commission’s recent press release:
“The Commission has already been working on a voluntary basis with a number of key stakeholders – including online platforms, Member States and Europol – under the EU Internet Forum in order to limit the presence of terrorist content online. In March, the Commission recommended a number of actions to be taken by companies and Member States to further step up this work. Whilst these efforts have brought positive results, overall progress has not been sufficient”.
According to the press release, the new rules will include draconian fines for internet companies that fail to comply with the new legislation:
“Member States will have to put in place effective, proportionate and dissuasive penalties for not complying with orders to remove online terrorist content. In the event of systematic failures to remove such content following removal orders, a service provider could face financial penalties of up to 4% of its global turnover for the last business year”.
Such astronomical penalties are likely to ensure that internet companies run no risks and instead self-censor material “just in case”.
According to the European Commission press release, the rules will require that service providers “take proactive measures – such as the use of new tools – to better protect their platforms and their users from terrorist abuse”. The rules will also require increased cooperation between hosting service providers, Europol and Member States, with the stipulation that Member States “designate points of contact reachable 24/7 to facilitate the follow up to removal orders and referrals”, as well as the establishment of:
“…effective complaint mechanisms that all service providers will have to put in place. Where content has been removed unjustifiably, the service provider will be required to reinstate it as soon as possible. Effective judicial remedies will also be provided by national authorities and platforms and content providers will have the right to challenge a removal order. For platforms making use of automated detection tools, human oversight and verification should be in place to prevent erroneous removals”.
It is hard to see why anyone would believe that there will be effective judicial remedies and that erroneously removed content will be reinstated. Even before such EU-wide legislation, similar ostensible “anti-terror legislation” in France, for example, is being used as a political tool against political opponents and to limit unwanted free speech.

Marine Le Pen, leader of France’s Front National, was charged earlier this year over images of ISIS atrocities she tweeted in 2015, including the beheading of American journalist James Foley and a photo of a man being burned by ISIS in a cage. She faces charges of circulating “violent messages that incite terrorism or pornography or seriously harm human dignity” and that can be viewed by a minor. The purported crime is punishable by up to three years in prison and a fine of €75,000 ($88,000). Le Pen posted the pictures a few weeks after the Paris terror attacks of November 2015, in which 130 people were killed; the text she wrote to accompany the images was “Daesh is this!”

In France, then, simply spreading information about ISIS atrocities is now considered “incitement to terrorism”. It is this kind of legislation, it seems, that the European Commission now wishes to impose on all of the EU.
The decision to enact legislation in this area was taken at the June 2018 European Council meeting – a gathering of the EU’s heads of state or government – in which the Council welcomed “the intention of the Commission to present a legislative proposal to improve the detection and removal of content that incites hatred and to commit terrorist acts”. It sounds, however, as if the EU is planning to legislate about a lot more than just “terrorism”.
In May 2016, the European Commission and Facebook, Twitter, YouTube and Microsoft agreed on a “Code of Conduct on countering illegal hate speech online” (Google+ and Instagram also joined the Code of Conduct in January 2018). The Code of Conduct commits the social media companies to review and remove, within 24 hours, content that is deemed to be “illegal hate speech”. According to the Code of Conduct, when companies receive a request to remove content, they must “assess the request against their rules and community guidelines and, where applicable, national laws on combating racism and xenophobia…” In other words, the social media giants act as voluntary censors on behalf of the European Union.
The European Council’s welcoming of a legislative proposal from the European Commission on “improving the detection and removal of content that incites hatred” certainly sounds as if the EU plans to put the Code of Conduct into legislation, as well.
At the EU Salzburg Informal Summit in September, EU member states agreed to “step up the fight against all forms of cyber crime, manipulations and disinformation”. Heads of member states were furthermore invited “to discuss what they expect from the Union when it comes to… preventing the dissemination of terrorist content online” and “striking the right balance between effectively combating disinformation and illegal cyber activities and safeguarding fundamental rights such as the freedom of expression”.
At the same time, however, the European Commission, under its Research and Innovation Program, has issued a call for research proposals on how “to monitor, prevent and counter hate speech online,” with a submission deadline in October.
In the call for proposals, the Commission says that it is “committed to curb the trends of online hate speech in Europe” and underlines that “proposals building on the activities relating to the implementation of the Code of Conduct on countering hate speech online are of particular interest”.
The Commission states that it is specifically interested in funding projects that focus on the “development of technology and innovative web tools preventing and countering illegal hate speech online and supporting data collection”; studies that analyze “the spread of racist and xenophobic hate speech in different Member States, including the source and structures of groups generating and spreading such content…”; and projects that develop and disseminate “online narratives promoting EU values, tolerance and respect to EU fundamental rights and fact checking activities enhancing critical thinking and awareness about accuracy of information” as well as activities “aimed at training stakeholders on EU and national legal framework criminalising hate speech online”.[1] One can only wonder which member states, and which kinds of “hate speech”, will come under scrutiny, and which will not.
The EU seems fixated, at least where the internet is concerned, on killing free speech.
Judith Bergman is a columnist, lawyer and political analyst.
[1] The European Commission writes in its call that it would like the funded projects to have the following results:
- Curbing increasing trends of illegal hate speech on the Internet and contributing to better understanding how social media is used to recruit followers to the hate speech narrative and ideas;
- Improving data recording and establishment of trends, including on the chilling effects of illegal hate speech online, including when addressed to key democracy players, such as journalists;
- Strengthening cooperation between national authorities, civil society organisations and Internet companies, in the area of preventing and countering hate speech online;
- Empowering civil society organisations and grass-root movements in their activities countering hate speech online and in the development of effective counter-narratives;
- Increasing awareness and media literacy of the general public on racist and xenophobic online hate speech and boosting public perception of the issue.