Social media companies are stepping up efforts to combat online propaganda and recruiting by Islamic terrorists, but the companies are doing it quietly to avoid the perception that they are helping the authorities police the web.
On Friday, one of the companies said it took down a profile that the company believed belonged to San Bernardino shooter Tashfeen Malik, who, along with her husband, is accused of killing 14 people in a mass shooting that the FBI is investigating as an “act of terrorism.”
Just a day earlier, the French prime minister and European Commission officials met with social media companies to demand faster action on what the commission called “online terrorism incitement and hate speech.”
The companies described their policies as straightforward: they ban certain types of content in accordance with their own terms of service, and require court orders to remove or block anything beyond that. Anyone can report, or flag, content for review and possible removal.
But the truth is far more subtle and complicated. According to former employees, the companies all worry that if they go public about their true level of cooperation with Western law enforcement agencies, they will face endless demands for similar action from countries around the world.
They also fret about being perceived by consumers as tools of the government. Worse, if the companies spell out exactly how their screening works, they run the risk that technologically savvy terrorists will learn more about how to beat their systems.
“If they knew what magic sauce went into pushing content into the newsfeed, spammers or whoever would take advantage of that,” said a security expert who had worked at two social media companies and asked not to be identified because of the sensitivity of the issue.
One of the most significant yet least understood aspects of the propaganda issue is the range of ways in which social media companies deal with government officials.
Three leading social media companies say they do not treat government complaints differently from citizens’ complaints, unless the government obtains a court order. The trio are among a growing number that publish regular transparency reports summarizing the number of formal requests from officials about content on their sites.
But there are “workarounds,” according to former employees, activists and government officials.
A key one is for officials or their allies to complain that a threat, hate speech or celebration of violence violates the company’s terms of service, rather than any law. Such content can be taken down within hours or minutes, and without the paper trail that would go with a court order.
Some well-organized online activists have also had success getting social media sites to remove content.
A French-speaking activist said he helped get rid of thousands of videos by spreading links to clear cases of policy violations and enlisting other volunteers to report them.
“The more it gets reported, the more it will get reviewed quickly and treated as an urgent case,” he said in a message to Reuters.
Nicole Wong, who previously served as the White House’s deputy chief technology officer, said tech companies would be reluctant to create a database of jihadist videos, even if it could be kept current enough to be relevant, for fear that repressive governments would demand similar systems to pre-screen any content they do not like.
“Technology companies are rightfully cautious because they are global players, and if they build it for one purpose they don’t get to say it can’t be used for anything else,” said Wong.
“If you build it, they will come … it will also be used in China to stop dissidents.”