Conservative news, it seems, is considered fake news. Liberals should oppose this dogma before their own news comes under attack. The most serious problem with attempting to eliminate hate speech, fake news or terrorist content by censorship is not the efficacy of the censorship; it is the premise itself that is dangerous.
When, guided by faulty algorithms or prejudiced Silicon Valley programmers, the New York Times starts to delete or automatically hide comments that criticize extremist clerics, or Facebook designates articles by anti-Islamist activists as “fake news,” Islamists will prosper and moderate Muslims will suffer.
Google’s latest project is an application called Perspective, which, as Wired reports, brings the tech company “a step closer to its goal of helping to foster troll-free discussion online, and filtering out the abusive comments that silence vulnerable voices.” In other words, Google is teaching computers how to censor.
If Google’s plans do not sound quite Orwellian enough for you, the practical results are rather more frightening. Released in February, Perspective’s partners include the New York Times, the Guardian, Wikipedia and the Economist. Google, whose parent company Alphabet uses the motto “Do the Right Thing,” is aiming its bowdlerization at public comment sections on newspaper websites, but the potential reach is far broader.
Perspective works by identifying the “toxicity level” of comments published online. Google states that Perspective will enable companies to “sort comments more effectively, or allow readers to more easily find relevant information.” Perspective’s demonstration website currently allows anyone to measure the “toxicity” of a word or phrase, according to its algorithm. What, then, constitutes a “toxic” comment?
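Under the hood, Perspective is exposed through Google’s Comment Analyzer REST API: a client posts a comment and asks for a TOXICITY score, and the service returns a probability between 0 and 1. The sketch below, in Python, shows the request and response shapes as they are publicly documented; the endpoint URL, field names, and the 0.87 sample figure (taken from the article’s own example) are illustrative assumptions, and no live call is made.

```python
# Sketch of the Perspective (Comment Analyzer) request/response shapes.
# The endpoint and field names follow Google's public documentation;
# treat them as illustrative, not authoritative. "YOUR_API_KEY" is a
# placeholder, and sample_response mirrors the article's 87% figure.

ANALYZE_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
               "comments:analyze?key=YOUR_API_KEY")

def build_request(text):
    """Build the JSON body for a toxicity query on `text`."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the summary toxicity probability (0.0-1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Payload for the phrase the article tests:
payload = build_request("ISIS is a terrorist group")

# A response shaped like the documented API, using the article's figure:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.87, "type": "PROBABILITY"}}
    }
}
print(toxicity_score(sample_response))  # 0.87
```

Note that the score is a probability that readers will *perceive* the comment as toxic, not a verdict on its truth, which is precisely why factual statements can score high.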
The organization with which I work, the Middle East Forum, studies Islamism. We work to tackle the threat posed by both violent and non-violent Islamism, assisted by our Muslim allies. We believe that radical Islam is the problem and moderate Islam is the solution.
Perspective does not look fondly at our work:
No reasonable person could claim this is hate speech. But the problem is not limited to opinions. Even factual statements are assigned a high “toxicity” level: Google considers the statement “ISIS is a terrorist group” to have an 87% chance of being “perceived as toxic.”
Or 92% “toxicity” for stating the publicly declared objective of the terrorist group Hamas:
Google is quick to remind us that we may disagree with the result. It explains that, “It’s still early days and we will get a lot of things wrong.” The Perspective website even offers a “Seem Wrong?” button to provide feedback.
These disclaimers, however, are very much beside the point. If it is ever “toxic” to deem ISIS a terrorist organization, then — regardless of whether that figure is the result of human bias or an under-developed algorithm — the potential for abuse, and for widespread censorship, will always exist.
The problem lies in the concept itself. Why does Silicon Valley believe it should decide which speech is valid and which is not?