Is YouTube a training site for terrorists? Gordon Rayner, political editor of the UK's Daily Telegraph, reported that British “counter-terrorism officers secretly recorded an alleged ISIL-inspired terror cell . . . discussing how to use YouTube to plot a van and knife attack in London.”
In June 2017, Ruthie Blum of the Gatestone Institute asserted that both YouTube and Google “are effectively being accessories to murder. They are also inviting class-action lawsuits from families and individuals victimized by terrorism. They need to be held criminally liable for aiding and abetting mass murder.” And while Google announced that it would “fight terrorism online,” Blum charged that Google and YouTube are “getting away with promoting jihad for a profit while disingenuously hiding behind the banner of free speech.”
In 2015, the Middle East Media Research Institute (MEMRI) “researched and flagged YouTube videos of support for jihadi fighters and ‘martyrs’ and ‘martyrdom,’ to test the platform’s ‘Promotes Terrorism’ flagging feature.” As a result of the research, “by mid March 2017, major companies began halting or reducing advertising deals with YouTube owner Google because Google had allowed their brands to become intertwined with terrorist and extremist content on YouTube. These companies have, so far, included AT&T, Verizon, Johnson & Johnson, the car rental company Enterprise Holdings, and drug manufacturer GSK. According to media reports, ordinary ads have been appearing alongside user-uploaded YouTube videos promoting hatred and extremism.” Nonetheless, Steve Stalinsky, Executive Director of MEMRI, explained that “YouTube’s removal of jihadi content is spotty” and inconsistent. In fact, “. . . 69 out of 115 videos remain active, highlighting the failure of YouTube’s flagging system.”
In 2016, npr.org asserted that “Zuckerberg didn’t sign up to head a media company . . . that has to make editorial judgments.” Thus, “[h]e and his team have made a very complex set of contradictory rules — a bias toward restricted speech for regular users, and toward free speech for ‘news’ (real or fake).”
Writing in Foreign Policy in October 2016, Nanjala Nyabola maintained that “. . . there’s a dark side to [Facebook’s] Free Basics that has the potential to do more harm than good [.] The app is . . . a version of the internet that gives Facebook — and by extension the corporations and governments that partner with Facebook — total control over what its users can access.” It is important to note that “in many African countries, traditional media has been co-opted by the state [.]” Thus, Nyabola asserted that “this record of collaborating with governments should make us wary of Free Basics. The app is only worth the gamble if one believes that governments where it’s been rolled out have the best interests of their citizens at heart — a presumption that is unwarranted in much of Africa.”
In November 2016, Aaron M. Renn wrote that “[i]t’s long been known that social-media platforms like Facebook, Twitter, and YouTube (owned by Google) delete significant amounts of user-posted content. Some of what gets removed is in clear violation of legitimate standards governing pornography and pirated content. But a lot of what gets pulled down is neither offensive nor illegal. Rather, it is content whose message these platforms disagree with.”