Insights
We examine Government plans to strengthen online safety legislation
Charities have been calling for children and vulnerable adults to be better protected online for several years.
Among the organisations calling for ramped-up online harms legislation is children’s charity NSPCC, which has been lobbying via its #WildWestWeb campaign since 2018.
The Government responded with its Online Harms White Paper and, in December 2020, published its full response to the consultation on it. These set out plans for new online safety standards, countering harmful content, and powers to hold social media companies to account.
In May 2021, firm plans for legislation were announced, with ministers publishing details of the draft Online Safety Bill. This proposed legislation is currently working its way through the parliamentary process.
In launching the legislation, then Culture Secretary Oliver Dowden said: “We will protect children on the internet, crack down on racist abuse on social media and through new measures to safeguard our liberties, create a truly democratic digital age.”
Here we look at the details of the Bill and some of the criticisms it has received, including concerns raised by the NSPCC and other charities that support vulnerable people.
The Online Harms White Paper’s focus on a duty of care for tech companies to protect users is retained. Under the Bill, companies “will need to take robust action to tackle illegal abuse”. This includes tackling hate crime, harassment and threats.
Social media sites must also tackle content that is “lawful but still harmful”, including the promotion of self-harm, misinformation, and disinformation.
Ofcom will be able to hold companies to account, with the Bill retaining the White Paper’s call for fines of up to £18 million or 10% of annual global turnover, whichever is higher. The regulator will also have the power to block access to sites.
People’s right to “express themselves freely online” is also included in the Online Safety Bill. People who have content removed must have effective routes to appeal, and content must be reinstated if it is found to have been removed unfairly. Ofcom will be able to hear appeals and complaints about the removal of content.
The aim of this element of the Bill is to ensure companies do not “adopt restrictive measures or over-remove content” in meeting their new online safety duties, says the Government.
Online content defined as “democratically important” will be protected in law. This includes content that promotes or opposes Government policy or a political party.
Social media companies will not be permitted to discriminate against political viewpoints and will need to set up clear policies to protect content.
An example the Bill gives is a social media company that chooses to ban graphically violent content. If a campaign group released such footage to raise awareness of violence against a specific group of people, the content could be allowed “given its importance to the democratic debate”.
Similarly, journalistic content must be protected, says the Bill. This applies to both professional and citizen journalists’ content.
Under the Bill, online companies must take responsibility for tackling fraud carried out through user-generated content.
This includes romance and investment scams conducted online, for example through private Facebook groups or messages sent via Snapchat.
Romance fraud is where a victim is tricked into thinking they are striking up a relationship with someone online, who is in fact a fraudster seeking money or personal information.
The NSPCC is concerned that the Online Safety Bill will “fall significantly short of tackling” online child abuse.
“There are substantive weaknesses” in the Bill, says NSPCC chief executive Peter Wanless, who adds that it “fails to prevent inherently avoidable abuse or reflect the magnitude and complexity of online risks to children”.
The Bill meets just nine of the charity’s 27 indicators of effective online safety legislation, and “a further 10 remain largely or completely unmet”, adds the NSPCC.
The charity is calling for tougher legislation to tackle groomers’ use of multiple platforms, where children are targeted on social networks and the grooming then escalates to live-streaming or encrypted messaging sites.
More is also needed to tackle abusers’ use of ‘digital breadcrumbs’ that signpost child abuse images. The NSPCC says the Bill needs to treat behaviour that facilitates child abuse with the same severity as the posting of illegal material. This will help stamp out abuse at an earlier stage, the charity adds.
The Bill only applies to social media companies with a “significant number of children” using their apps, the NSPCC also warns. This could mean that sites such as Telegram and OnlyFans are excluded from the Bill’s duty of care to protect children.
The NSPCC also wants to see a ‘named persons scheme’ so that senior tech company managers are personally liable for safeguarding and subject to criminal sanctions. The Government has said this could be introduced later “if tech firms don’t step up their efforts to improve safety”.
Concerns have also been raised around the Bill’s potential impact on free speech. Politicians and social media companies are concerned over the ambiguity of what constitutes “lawful but harmful content” and “misinformation and disinformation”.
Conservative MP David Davis has said the Bill’s “chilling effect on free speech will be terrible”, while Twitter says the legislation needs more clarity.