LONDON — Britain proposed sweeping new government powers to regulate the internet to combat the spread of violent and extremist content, false information and harmful material aimed at children. The proposal, announced on Monday, would be one of the world’s most aggressive actions to rein in the most corrosive online content.
The recommendations, backed by Prime Minister Theresa May, take direct aim at Facebook, Google and other large internet platforms that policymakers believe have made growth and profits a priority over curbing harmful material. The government called for naming an internet regulator with the power to issue fines, block access to websites if necessary and make individual executives legally liable for harmful content spread on their platforms.
“The internet can be brilliant at connecting people across the world, but for too long these companies have not done enough to protect users, especially children and young people, from harmful content,” Ms. May said in a statement. “That is not good enough, and it is time to do things differently.”
By introducing the proposal even as a deadline nears for Britain to exit the European Union, Ms. May signaled that enacting the new internet regulations was a high priority of her government.
Around the world, governments are having a broader debate about reining in the internet and whether their actions will restrict free expression. Frustrated by what they see as a lack of progress in protecting users, governments are passing or considering new laws to define what qualifies as acceptable online communication. Advocates for human rights and free speech warn that the policies will lead to censorship and be exploited by more repressive governments.
Since the massacre of 50 people at two mosques in New Zealand in March, calls for internet regulation have increased globally. The authorities say the gunman distributed a racist manifesto online before using Facebook to live-stream the shooting. In response, Australia passed a law last week that threatens fines for social media companies and jail for their executives if they fail to rapidly remove “abhorrent violent material” from their platforms. New Zealand is also considering new restrictions.
In Singapore, draft legislation was introduced last week that supporters said would restrict the spread of false and misleading information. But opponents have warned it could be used to go after critics of the government. India has also proposed broad new powers to regulate internet content. The European Union is debating a new terrorism content measure that some have warned is overly broad and will harm free expression. And Germany last year began enforcing a law prohibiting online hate speech. Digital rights groups have criticized that law as leading to the unnecessary blocking of legitimate content.
In response to the political backlash, internet companies have toughened their policies and hired tens of thousands of human moderators to screen for problematic material. But with billions of pieces of content being shared each day on services like Facebook, Instagram and YouTube, deciphering what is harmful isn’t always an easy task, and governments have criticized the companies for not being aggressive enough.
In recent weeks, Mark Zuckerberg, Facebook’s chief executive, has said new regulations are needed, particularly to more clearly define what is acceptable content so companies aren’t the main judges.
And Facebook took a similar stance in response to the British proposal. “New regulations are needed so that we have a standardized approach across platforms and private companies aren’t making so many important decisions alone,” Facebook said in a statement. “These are complex issues to get right and we look forward to working with the government and Parliament to ensure new regulations are effective.”
Twitter said in a statement that it would “engage in the discussion between industry and the U.K. government, as well as work to strike an appropriate balance between keeping users safe and preserving the internet’s open, free nature.”
Google had no immediate statement.
British officials are calling for the creation of a mandatory “duty of care” standard intended “to make companies take responsibility for the safety of their users and to tackle harm caused by content or activity on their services.”
The government listed a number of topics that companies could be required to address or risk fines and other penalties. The list includes material supporting terrorism, inciting violence or encouraging suicide, as well as disinformation, cyberbullying and inappropriate material accessible to children. The rules would apply to social media platforms, discussion forums, messaging services and search engines.
Actions in Britain and elsewhere signal a new era for the internet. Western democracies have largely avoided regulating online communication. But as evidence mounts that online actions are having harmful real-life consequences by contributing to violence, compromising elections and spreading hateful ideologies, governments are becoming more willing to intervene.
In the United States, where free speech is more of a core value than in other nations, there has been less momentum to regulate internet content. But in Washington this week, the House Judiciary Committee will question executives from Facebook and Google on the spread of white nationalism on social media.
“The era of self-regulation for online companies is over,” Jeremy Wright, Britain’s digital secretary, said in a statement. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.” He added: “Technology can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action.”