YouTube, Twitter, and Facebook put on notice as the UK appoints a new internet regulator

The UK government has announced ambitious new plans to regulate the internet, which will give websites a “duty of care” to protect their UK users from illegal content related to terrorism and child exploitation, as well as harmful content more generally. Under the proposal, the UK’s existing broadcast regulator, Ofcom, will be responsible for enforcing the new rules, which are expected to include the power to fine internet companies that don’t comply.

A move to counter offensive content

Full details of the legislation, along with Ofcom’s powers to enforce it, are expected to be announced later this spring. While the government will set the overall direction of the legislation, Ofcom will be given the flexibility to decide how to respond to new “online dangers” as they emerge.

The proposal contains two major requirements, The Guardian notes. The first is that illegal content, such as material depicting child sexual abuse or promoting terrorism, must be removed quickly and, where possible, prevented from being posted in the first place.

For content that is merely “harmful” rather than illegal, online platforms will need to be explicit about what content and behavior are acceptable on their sites, and enforce those rules consistently and transparently. This includes content that may encourage or glorify self-harm or suicide. The government says this flexibility is important to safeguard users’ rights online, including free speech and press freedoms.

According to the government, the regulation will apply to any website that hosts user-generated content, including forums, comments, or video sharing. This definition means it won’t be just social media networks that are affected. Sites deemed to pose a low risk to the general public will not be covered.

The UK isn’t the first country to attempt this

However, as we have seen, moderating online content is a major challenge even for tech giants such as Facebook and Google, which can easily afford it. It’s likely to be an even harder job for smaller organizations.

Ofcom said in a statement that it welcomed being chosen as the regulator. “We will work with the Government to help ensure that regulation provides effective protection for people online and, if appointed, will consider what voluntary steps can be taken in advance of legislation,” it said.

The UK government’s most recent attempt to moderate content on the internet ended in failure. Late last year the country scrapped its long-delayed plans to implement age verification for accessing online porn. The proposals were widely criticized for raising privacy concerns, as well as for concentrating control with MindGeek, a major porn conglomerate that was poised to supply the age verification system.

The UK isn’t the first country in Europe to seek stricter controls over the internet. Bloomberg notes that two years ago Germany began enforcing new laws governing online hate speech and fake news, with fines of up to €50 million for sites that fail to remove offensive content.

