A trade association representing big tech companies including Facebook, Amazon and Google has called on the EU to introduce a framework to limit their legal liabilities when proactively policing harmful content.
In a paper released on Monday, the European Digital Media Association said that instituting safeguards would improve moderation by incentivising service providers to take “reasonable, proportionate and feasible steps” against such material.
“All of our members take their responsibility very seriously and want to do more to tackle illegal content and activity online,” said Siada El Ramly, director-general of EDiMA. “A European legal safeguard for service providers would give them the leeway to use their resources and technology in creative ways in order to do so.”
Under the EU’s ecommerce directive, introduced in 2000, online platforms are not liable for material they host unless they have “actual knowledge” that it is circulating there, for instance when a user informs them of it. Platforms are also only obliged to remove harmful content once they are alerted to its existence.
But in the upcoming Digital Services Act, the EU is considering a range of new policies, including incentivising tech companies to moderate their platforms more proactively.
According to the EDiMA report, if companies moved to this proactive model they would face an extra burden of liability for content on their platforms. This, the report argued, would necessitate a new safeguard similar to that contained in Section 230 of the US Communications Decency Act, under which service providers’ voluntary efforts to restrict illegal activity do not affect their liability.
The report also raised further concerns over proactive monitoring, including that it would “infringe on the fundamental rights to speech and privacy”, and could have side-effects on competition, with smaller service providers likely to struggle to implement the measures needed to monitor content comprehensively.
While questions over the adequacy of online moderation have long plagued online platforms, they have reached fever pitch this year amid a surge of misinformation around coronavirus, domestic and foreign disinformation campaigns in the US ahead of the presidential election, and the global spread of conspiracy theories such as QAnon.
Facebook’s independent oversight board, which will make decisions on what content should be allowed on the social network’s platforms and the fairness of its policies, started hearing its first content moderation appeal cases earlier this month.