Social Media Use: Minimum Age Debate
I thank Kim Campbell and the petitioners, including almost 400 from East Hampshire, for bringing this debate to Parliament. There has been a lot of interest of late in Australia’s upcoming ban on social media for under-16s, and I wanted to understand how the Australians are going to implement it, given some of the complexities and definitional difficulties. I recommend to colleagues a very good interview on American National Public Radio with Australia’s eSafety Commissioner, in which she said that it is not about flicking some big switch. She said that there was a possibility that some social media functionality could be removed, rather than an entire app being blocked; that
“messaging and gaming sites and anything that delivers education or health care information”
would be exempt; and that, ultimately, it would be for the Minister for Communications to
“decide which platforms are in and which are out.”
Well, I hope they have invested in their legal defence budgets.
It is true that parents vary widely in what they think is good or acceptable. Everybody agrees that their child should be able to call or text home to let mum or dad know that they are delayed or feeling worried, or that their club has been cancelled. Some also value things that can be done only on a smartphone—such as using a map to find the way home—and there is a whole other debate about education technology and the use of Show My Homework and all the rest of it. Some parents are totally happy with the entirety of the electronic world—smartphones and social media. Let us be honest: it is parents who often help children get around the minimum age limit to be on these platforms. Sometimes, we say that they do that only for fear of the child missing out, and that may be true, but we do not know whether it is true in the majority of cases.
In addressing these questions as legislators, we often fall back on saying, “Hang on, we’re not talking about banning all phones; we’re talking specifically about smartphones. And we’re not talking about getting rid of the good stuff; we’re only talking about getting rid of the bad stuff.” This, of course, is the easy stage in the legislative process, and things become much harder later, when we have to define precisely what we mean. I am about to recommit that sin: I am going to talk about an ill-defined “it” that we may in some way want to restrict. That “it” is something about smartphones and social media that I will today fail to define, but I hope to come back at the end to say a little about more precisely what I mean.
I am not in the business of trying to put new restrictions on how parents manage their families or of trying to do things to them that they could do for themselves. There is already a minimum age for using social media; it just happens to be an arbitrary age that is based on some legislation—not even from this country—from the 1990s. When the GDPR came in through the European Union, which we were in then, countries could choose an age anywhere between 13 and 16. Different countries chose different ages; we happen to have settled on 13. Most people would say that we have to set the bar somewhere, so the question becomes, where? Of course, we could, alternatively, say that the Government or a regulator have no role in setting an age at all. However, if that is not our view, and we accept that there should be an age, we have to ask the secondary question: what should it be? There is no ancient right to be on TikTok at age 13. These are novel technologies, and we are facing these questions now for the first time.
In this country, there are two main thresholds for the transition from childhood to adulthood, and they are 16 and 18. Those are not the only ones, but they are the main ones. In English law, there has never previously been a concept of an age of digital consent, nor, to my knowledge, an equivalent non-digital concept of consent in contract law for somebody under the age of majority. I grant that it is arguable, but it seems that 16 is the most appropriate threshold.
I have met parents from Smartphone Free Childhood, but also young people. This is a big issue in Brighton Pavilion. Has the right hon. Member thought about pushing for the Minister and Members to talk more with young people about where the age limit should lie, rather than trying to come up with a number in the middle of a debate? It is clear from talking to young people that they feel that parts of social media are very toxic, but I also think they are best placed to judge where the limit should lie.
To be fair to the Minister and previous Ministers, I think they do make efforts to hear from young people. An interesting survey by the Youth Endowment Fund, which I commend to the hon. Lady and others, put an extreme proposition to 13 to 17-year-olds: “If you could turn off social media forever for you and everybody else, would you do it?” While a majority did not say yes to that extreme proposition, something like a third did. We also have various other surveys.
It is true that when we talk to children, as I am sure many colleagues have done in schools across their constituencies, we get a variety of views. In particular, children do not want to be left out, and as parents we do not want that for our children either. If everybody else is in a certain group or has a certain means of communication, we tend to want our children to have that too.
The evidence is not perfect. There is even evidence that some screen time is a positive good. A Programme for International Student Assessment (PISA) study in 2019 talked about a Goldilocks effect, where about an hour of screen time was beneficial for mental wellbeing, after which the benefit declined. That same study found wide differences in life satisfaction between what it called “extreme internet users” and others. There are now plenty of other studies on everything from happiness, the quality of relationships and eyesight to the effect on sleep and concentration.
There is also the rising incidence of mental ill health among teenagers, which—for the avoidance of doubt and to take politics out of it—is not unique to this country and not uniquely a post-covid effect. Causality is still hard to prove, but it seems extraordinary that, when we are talking about children, we allow something to happen because we cannot prove 100% that it causes harm, rather than allowing it to happen only if we can prove that it is safe. That is not the way we deal, for example, with children’s food or toys. I would turn the question around: are people really suggesting that the prevalence of self-harm is nothing to do with the prevalence and normalisation of certain imagery on social media?
The Online Safety Act was a landmark piece of legislation, and we will debate it again in Westminster Hall on Wednesday. Everybody who worked on it—including myself—was always clear that it would not be the last time we had to come back to this subject in legislation. It is inevitable that there will be further regulation and restrictions in the interests of greater child protection. I therefore urge the Government to move from working out whether there will be further protections to working out what those will be. Of course, to write legislation—to return to where I started—one needs to be able to define things precisely and, in reality, there is no bright line between a smartphone and a brick phone, and no slam-dunk definition of social media either.
It can be instructive to think about individual platforms and services. One of the things we worry about is TikTok. Do we worry about Snapchat? Yes, we probably do, because of the association with bullying and the disappearing messages. But some families like the Snap Map function, because they can see where different family members are. Do we worry about Instagram? Yes, we probably do, and it has a particular association with issues around body image. But it is also a way for people to share lovely family photos, and for extended families to keep in touch.
A lot of families allow children to have WhatsApp, when they would not allow them to have TikTok, and up until quite recently, some would not even have called it a social media platform. Where we think we have problems with disinformation on TikTok and Facebook, other countries have them with WhatsApp. What about YouTube? For many people, YouTube is not social media; it is a place where they go to watch videos or listen to music. But because it has user-generated content, it is also social media; it is certainly capable of sucking up a lot of young people’s time, and it has potential rabbit holes that people can fall down.
What about gaming? Gaming is different from social media, but modern gaming also has quite a lot of social media-like functions, such as lists of friends. Certainly, it is a way of trying to create communities of people with common interests. It is also often linked to the use of Discord or to streaming on Twitch. And, again, it certainly takes up a lot of time—unless, of course, someone is in China, where the Government will allow them to do it for only one hour a day, on Fridays, Saturdays and Sundays.
All of the above have risks attached, and they all have negatives, but we are unlikely to say that we want to ban them all—far from it. There is also a different risk: if we take one thing and ban it based on its specific features—its specific definition—we just push people to other places. Other things will then get more social-media characteristics, and children may end up in darker places on the internet. All of that is probably why the Australians ended up where they did: saying that it is probably more about specific functionality and that, at the end, it might be about having to make case-by-case judgments.
We worry about content; unwanted, inappropriate contact, as others have said; the excessive time children spend on platforms; potential addiction; the effects on sleep and concentration; and myopia. Crucially—my hon. Friend the Member for Reigate (Rebecca Paul) covered this very well—these technologies can also crowd out other things. Whether they, in and of themselves, are good or bad, there are only 24 hours in a day, and we want children, in the time they are not at school and not asleep, to be able to access the full range of things that childhood should be all about.