UK Government Proposes Online Safety Bill Which May End Up Being A #Fail

This could be interesting, or go horribly sideways for the United Kingdom. I say that because they have a newly proposed law called the Online Safety Bill. And if execs of tech companies don't comply, this could happen to them:

Proposed UK laws could see top managers at tech companies be jailed if they fail to meet the demands of regulators. The laws, coming in the form of an Online Safety Bill, were introduced to Parliament on Thursday after almost a year of consultation. The UK government commenced work on the proposed laws in May last year to push a duty of care onto social media platforms so that tech companies are forced to protect users from dangerous content, such as disinformation and online abuse. 

Under the proposed legislation, executives of tech companies could face prosecution or jail time if they fail to cooperate with information notices issued by Ofcom, the UK's communications regulator. Through the Bill, Ofcom would gain the power to issue information notices for the purpose of determining whether tech companies are performing their online safety functions. A raft of new offenses has also been added to the Bill, including making in-scope companies' senior managers criminally liable if they destroy evidence, fail to attend or provide false information in interviews with Ofcom, or obstruct the regulator when it enters company offices. 

The Bill also looks to require social media platforms, search engines, and other apps and websites that allow people to post their own content to implement various measures to protect children, tackle illegal activity, and uphold their stated terms and conditions. Among these measures are mandatory age checks for sites that host pornography, criminalizing cyberflashing, and a requirement for large social media platforms to give adults the ability to automatically block people who have not verified their identity on the platforms. The proposed laws, if passed, would also force social media platforms to step up their moderation efforts, with the Bill calling for platforms to remove paid-for scam ads swiftly once they are alerted to their existence. A requirement for social media platforms to moderate "legal but harmful" content is also contained in the Bill, which would give large social media platforms a duty to carry out risk assessments on these types of content. Platforms would also have to set out clearly in their terms of service how they will deal with such content and enforce those terms consistently.

This legislation is being proposed with good intentions, but the devil is in the details, as always. For starters, differentiating between harm and free speech is fraught with difficulty. A subjective test doesn't give technology companies the sort of certainty they need, and some might respond by taking a cautious approach to what they allow on their sites, stifling free speech, open discussion, and potentially useful content with controversial themes. Not to mention the fact that some tech companies may simply pull out of the UK rather than deal with this bill. And then there's the fact that there are any number of ways to circumvent age checks and access material that would normally not be seen in the UK under this proposed bill. In short, I think this is ultimately doomed to fail. But it will likely pass anyway, and the havoc that it causes will be long lasting as a result.
