Blighty blockheads bashed

UK at serious risk of over-blocking content online, human rights watchdog warns

Little legal accountability for content removal by Internet Watch Foundation, says report.

The UK is at serious risk of over-blocking Web content, the Council of Europe (CoE) has warned in a scathing report.

Shortcomings in how the UK manages the reporting and removal of illegal content—such as the promotion of terrorism, child abuse material, or hate speech—were highlighted in a study, published this week, carried out by the Swiss Institute of Comparative Law on behalf of the CoE.

The 32-page report also concluded that some British practices may be in breach of the case law of the European Court of Human Rights, and that the current framework seems more concerned with protecting ISPs from liability than with the general public’s freedom of expression.

“Governments have an obligation to combat the promotion of terrorism, child abuse material, hate speech and other illegal content online. However, I am concerned that some states are not clearly defining what constitutes illegal content. Decisions are often delegated to authorities who are given a wide margin for interpreting content, potentially to the detriment of freedom of expression”, said CoE secretary general Thorbjørn Jagland.

Counter-terrorism laws, in particular, raised serious concerns, he added. In many cases, these laws permit the blocking, filtering, or removal of content on grounds formulated in vague or imprecise terms such as “extremism” or “terrorist propaganda.”

The blocking, filtering, and take-down of illegal content in the UK are primarily handled through private regulation, via voluntary cooperation by ISPs.

“The difficulty with such self-regulatory measures is that it leaves the private body to decide what standards apply and make a determination about the content. If the social network or search engine is very responsive to complaints, it potentially takes down harmless and lawful material simply because someone objects to it,” said Jacob Rowbottom, associate law professor at the University of Oxford.

The study singled out the Internet Watch Foundation (IWF), whose job it is to police online child abuse material.

The IWF has existed in some form since 1996, but it is not a government body or law enforcement agency. It is instead a registered charity, funded by the European Union and the wider online industry, including big players such as Google and Microsoft.

The report noted that “the IWF has taken a number of steps to better ensure that its operations are transparent and proportionate,” but warned that, “in the absence of legal safeguards against over-blocking, the threshold for the kind of material which may be subjected to removal is therefore much lower than that which might otherwise be set out in law.” It continued:

The blacklist operated by the IWF effectively amounts to censorship. Not only are the blacklist and notices sent to members of the IWF kept secret, but there is no requirement to notify website owners when their site has been added to the blacklist.

Even where statutory rules do exist with respect to notice and take-down procedures (namely, the Terrorism Act 2006 and the Defamation (Operators of Websites) Regulations 2013), the provisions are not so concerned with safeguards for the protection of freedom of expression, as with offering an exemption from liability for ISPs.

Christopher Yvon, UK permanent representative to the CoE—which champions human rights and the rule of law—dismissed the report, however.

He said that the UK “has a strong independent media and a democratic political system which combine to ensure that there are no government restrictions on access to the Internet,” and—far from reducing the role of corporate entities—“the government would welcome industry being further incentivised to take more responsibility for the content on their networks.”

He pointed out that the powers to take down terrorist-related content available under the Terrorism Act 2006 had never been used. “All removals of terrorist-related content are achieved through voluntary means with communication service providers (CSPs) for breaching their own terms and conditions,” he added.

The CoE is an international human rights watchdog comprising 47 member states—not to be confused with the EU's European Council.

Ars sought comment from the Internet Watch Foundation on this story, but it hadn't got back to us at the time of publication.

Update

The IWF's policy and public affairs director Kristof Claesen told Ars after publication of this story that he was aware of “issues raised concerning our role dealing with child sexual abuse material online and of the wider debate on how to tackle illegal online content,” but said a human rights audit of the charity, carried out in 2013 by the UK's former director of public prosecutions, Lord Ken Macdonald, was broadly positive.

Since then, he added, the IWF had taken on board Macdonald’s recommendations and strengthened procedures and operations where possible.

“Our mission is to eliminate child sexual abuse imagery online. The images and videos we work to remove are considered illegal by UK law, according to the levels in the Sentencing Council’s Sexual Offences Definitive Guidelines. Since April 2014, there have been three levels: A, B and C,” said Claesen.

“Our most recent figures (Annual Report 2015) showed 34 percent of the webpages confirmed as containing child sexual abuse material were category A—which is the rape or sexual torture of children. Of the confirmed illegal webpages, 69 percent depicted children assessed as under 10 years old. 1,788 webpages appeared to depict children under two years old.”
