UK may impose two-hour social media limit for under-16s
Children across the country could soon face restrictions on the amount of time they are able to spend on social media platforms, as the government seeks to strengthen safeguards for young people online.
Technology Secretary Peter Kyle has made clear his intention to address what he described as "compulsive behaviour" among young people, signalling that the government is considering both a daily two-hour limit and app curfews for users under 16. An official announcement on the specifics of these plans is expected "in the near future".
The proposed move follows mounting concern over the impact of excessive social media use on the mental health and wellbeing of children. Despite these concerns, industry experts and legal professionals have raised questions about how any such restrictions might actually be enforced in practice.
Iona Silverman, IP and Media Partner at Freeths, noted, "Although Ofcom has made a start in regulating the highest risk platforms, it needs to move to tackle everyday harms, such as excessive exposure to social media. The suggestion of a two-hour limit is interesting: some might argue that two hours on social media is too long in itself."
Silverman added, "The real question that springs to mind is, how will this be enforced? Given most young people are using a number of different social media platforms, possibly across a number of different devices, it will be incredibly hard to police any kind of cap. In addition to legislative change and increased enforcement from regulators, we need to see a cultural shift. Parents need to have more open conversations with children and young people about social media and should also try to model the use they want their children to see."
As the government prepares its proposals, new online protections for children are already coming into effect. From this week, technology firms are required to comply with Ofcom's Protection of Children Code of Practice, a set of guidelines designed to help firms fulfil their obligations under the Online Safety Act.
The rules, which apply to so-called "risky" sites and apps, include requirements for "highly effective" age verification systems. These systems are intended to shield children from harmful content, such as pornography, as well as material relating to self-harm, suicide, eating disorders and extreme violence.
The development marks a significant milestone, but critics contend the reforms have not gone far enough. Mark Jones, Partner and online safety expert at law firm Payne Hicks Beach, commented, "More could be done by Ofcom and its Protection of Children Code of Practice. It has taken too long to get to this point. The Online Safety Act came into force in October 2023 and yet it has taken a further 19 months for these Codes to come into force and during that time significant numbers of children have been exposed to harm."
Jones also pointed to gaps in the new system, noting that while service providers are given guidelines, they have the flexibility to decide for themselves how to comply. "This flexibility risks too much power resting with service providers," he said.
In addition, encrypted messaging services remain a particular area of concern, as there is little clarity over whether apps such as WhatsApp and Signal will be required to scan private messages for harmful material.
Jones warned that rapid advances in technology, particularly artificial intelligence, could soon outpace current regulation, leaving children exposed. "The government is already referring to the Codes as the 'foundation' for a safer online experience for children and that it does not mark the end of the conversation. In my view this underlines that there is more to be done to protect children and young people from online harms," he said.
Online safety campaigners have echoed these concerns, suggesting that the changes represent a missed opportunity to provide parents with the confidence that their children are truly protected. They argue that Ofcom's regulations, while a step forward, fall short in their oversight of the tech industry, raising fears that the needs of children could be eclipsed by business interests.
As these measures take effect, the debate over how best to safeguard children on social media and online platforms looks set to continue. Age limits, time restrictions and better parental engagement are likely to be central to the evolving conversation around children's welfare in the digital era.