(1 week, 5 days ago)
General Committees

It is a pleasure to serve under your chairmanship, Mr Dowd. I am happy to confirm that the Opposition will support these regulations, not least because, as the Minister has said, they complement the previous Government’s work on the Online Safety Act, and I was the Minister responsible for implementing the Act from when it received Royal Assent until the general election.
I take great pride in having served in the Government that introduced and passed the Online Safety Act. It places significant new responsibilities and duties on social media companies, platforms and services to increase safety online. However, most importantly, this vital piece of legislation ensures that children are better protected online. Having just attended a roundtable where we listened to victims of online abuse, I know that that is more important than ever. I am sure the Minister shares my view on that. I share his sentiment—the Opposition will work with the Government to make sure that victims of online abuse receive justice and are supported and protected.
It is worrying and sad that almost three quarters of teenagers between 13 and 17 have encountered one or more potential harms online, and that three in five secondary school-aged children have been contacted online in a way that potentially made them feel uncomfortable. It is for those reasons that we ensured that the strongest measures in the Online Safety Act protect children. For example, platforms are required to prevent children from accessing harmful and age-inappropriate content and to provide parents and children with clear and accessible ways to report problems online when they arise. Furthermore, the Act requires all in-scope services that allow pornography to use highly effective age assurance to prevent children from accessing it, including services that host user-generated content and services that publish pornography. Ofcom has robust enforcement powers available against companies that fail to fulfil their duties.
The Online Safety Act also includes provisions to protect adult users, as it ensures that major platforms are more transparent about which kinds of potentially harmful content they allow. It gives users more control over the types of content they want to see. I note that Ofcom expects the illegal harms safety duties to become enforceable around March 2025, following Ofcom’s publication of its illegal harms statement in December 2024. Does the Minister agree that platforms do not need to wait for those milestones, as I often said, and should already be taking action to improve safety on their sites? Can he confirm that he is encouraging platforms to take proactive action in advance of any deadlines?
Separately from the Online Safety Act, the last Government also launched the pornography review, which explores the effectiveness of regulation, legislation and the law enforcement response to pornography. Can the Minister provide a reassurance that the review’s final report is on schedule and will be published before the end of the year? Can he also clarify whether the review will consider the impact of violent and harmful pornography on women and girls? I would be grateful for the Minister’s comments on those points and for his co-operation throughout his tenure. I am happy to add our support to these regulations, and to see that the previous Government’s pivotal piece of legislation is making the UK the safest place in the world for a child to be online.
(1 month ago)
Commons Chamber

The Secretary of State, in one of his first acts in his new role, cut £1.3 billion-worth of funding that would have been transformative for enabling cutting-edge research and development in Britain. I note that he has also ditched our ambition to turn Britain into a science and technology superpower. We set a target of £20 billion for R&D, which we met, but he has set no such target. Will he be setting a target, and can he today promise that there will be no cuts to R&D expenditure?
I congratulate the hon. Gentleman on his appointment to his Front-Bench role. Let us just be honest about what this Government inherited. That £20 billion black hole affects every single Department across Government. My Department inherited a situation where the previous Government—including the former Chancellor, the right hon. Member for Godalming and Ash (Jeremy Hunt), who is sitting on the Opposition Front Bench—committed at this Dispatch Box to an exascale project to which not one single penny had been committed. That was a fraud committed on the scientific community of our country by that Government, and I had to make the difficult decision to move forward—
(2 months, 2 weeks ago)
Commons Chamber

I would like to start by paying tribute to all the Members who made their maiden speeches. I congratulate each and every one of them on that nerve-racking experience. We may not always agree on everything, but clearly they will all be formidable contributors to this House.
I would also like to welcome the ministerial team to their place—and the new Secretary of State. As the shadow Secretary of State, my hon. Friend the Member for Arundel and South Downs (Andrew Griffith), said in his opening remarks, the Front Bench have my utmost respect for serving in public office. Being a Minister is a great privilege and we know that it also places burdens on those closest to us, so I genuinely wish them well. The civil servants I worked with in the Department, including those in my private office, were hard-working, dynamic and top notch. I am sure Ministers will have the same experience.
As His Majesty’s Opposition, we will of course hold the Government to account. We will challenge them where challenge is required, but let me be clear: our sole intention is to ensure that the United Kingdom remains at the forefront of global innovation and technological advancement. Ministers may not believe me when I say it, but I do want them to be successful because their successes are the nation’s successes.
It is in that spirit that I welcome some of the announcements on enhancing technological use in the public sector, because, as has been said, productivity in the public sector lags behind that of the private sector. The private sector has largely recovered from the pandemic, but the public sector remains less efficient than it was before. That is important for two reasons. First, the public sector—our services that we are so privileged to have, whether the national health service or our police, to name just two—represents 20% of the national output. Improving technology in our services means improving the very services that the British public rely on. Secondly, public services are funded by taxpayers’ money. It is not our money or the Labour party’s money. Hard-working British people pay their taxes for these services, so it is morally right that we all do all we can to make our public services fit for purpose and as efficient as possible.
That is why the previous Conservative Government launched the comprehensive public service productivity review to address low levels of public sector productivity. In our national health service, we utilised AI to cut administration to keep more staff on the frontline and increase the speed of diagnosis, including better diagnoses of stroke, and lung and breast cancers. Our police officers were kept on the streets, rather than pushing pens, with the use of technology to speed up simple administrative tasks, which meant that crime fell in every part of the UK—except, of course, in Labour-run London. We see and welcome the value of better technological uses in the public sector.
I have some questions for the Minister and for the Secretary of State, which I hope the Minister will be able to answer on his behalf. First, over the summer we were all appalled by the riots that gripped the nation and the role that social media played. Digital accessibility for our most vulnerable people matters, and part of that must be that the public have trust in using social media and online platforms. Antisemitism and anti-Muslim incidents have seen a huge rise online. Ministers will have met social media companies, including X. Will the Minister please clarify what actions will now be taken, as we move forward, to ensure that social media is not used to perpetuate and amplify antisemitic and anti-Muslim hatred?
Secondly, one of my last acts as Minister for Tech and the Digital Economy was to instruct officials in the Department to start reviewing and refreshing the Government’s digital inclusion strategy, and to present options. The timing of the general election prevented that work from making progress, but there is no reason why it cannot continue now. Will the Minister confirm that she will continue that work? Will she commit to ensuring that the necessary funds are put aside, so our public services are more digitally inclusive? Conservatives recognise the importance of making our public services truly accessible. As the last review occurred 10 years ago, will she commit to ensuring that the review is carried out in a timely manner? If our public services are to be truly inclusive, they must be digitally inclusive, and I look forward to hearing the Minister’s response.
I am also keen to know what happened to the AI Bill, the legislation that the Secretary of State promised so often when he was in opposition, and what he intends to do about AI regulation. Perhaps the Minister can tell us what assessment she has made of the number of AI companies that will be created as a result of his plans, how much investment will be generated, and how many new jobs there will be. Why was the AI Bill not ready for the King’s Speech?
Let me now turn to the Secretary of State. He has had a very busy summer, but whether it was productive is another question. Countries across the world are brimming with ambition, investing in some of the most exciting and transformative technologies, such as artificial intelligence, and ensuring that they are at the forefront of global technological innovation. Let there be no doubt that this is a global race, and I fear that in its first few months in office the Labour party may already have done enough to ensure that this great nation of ours never comes close to winning that race.
The previous Conservative Government set out an ambition to be a science and tech superpower by 2030. I note that the Secretary of State has not shared that ambition, so imagine my surprise when I saw that in one of the Government’s first big moments, one of their first big acts was to cut £1.3 billion of investment in supercomputer capability and related research funding. The Secretary of State talks about being a partner to the tech industry. Well, on hearing the news of the cutting of exascale funding, one tech entrepreneur said to me, “With friends like these, who needs enemies?”
Perhaps the Secretary of State—or the Minister, on his behalf—could clarify whether he fought against that decision or endorsed it. Was he able to stand up to the Chancellor, or was he so intimidated by her that he lost his voice? We know that the Prime Minister was unsettled by that portrait of Margaret Thatcher, so perhaps the Secretary of State was similarly unsettled by the Chancellor. Did he even bother to fight for Britain’s AI and tech entrepreneurs, or were the trappings of ministerial office so enticing that he forgot to defend the single most important investment that would have ensured that we maintained our top position in the global AI race for decades to come?
But let me offer the hand of friendship. [Laughter.] I assure the Secretary of State that the hand of friendship exists. If he is worried about standing up to the Chancellor, we on this side of the House will of course support him. He does not need to be afraid. We believe in economic growth, so we will help him to stand up to the Chancellor. After all, his successes will be the nation’s successes, and that is our priority.
Let me move on from exascale. Over the summer, it became clear that the Labour Government had capitulated to the junior doctors and given them inflation-busting pay rises without asking for any modernisation or efficiency improvements in return. Before that decision, did the Secretary of State meet the Health Secretary and insist on efficiency improvements or better use of technology, or was giving in to Labour’s trade union paymasters more important? He did say that they were joined at the hip, so perhaps he will be able to show what he did to fight for the tech entrepreneurs of this country. I note that my right hon. Friend the Member for Basildon and Billericay (Mr Holden) asked him a question that he did not answer.
I also note that the Government have been silent on the NHS productivity review, which was backed by more than £3.4 billion. Again, he did not answer a question, asked on this occasion by my hon. Friend the Member for Hinckley and Bosworth (Dr Evans). Can the Minister now confirm that that funding is safe, or is it part of the Chancellor’s “black hole” calculations? Our plan and our review were backed by the NHS and would have saved 13 million clinician hours. What, actually, are the Labour party’s plans?
When the Transport Secretary capitulated to the transport unions, did this Secretary of State meet her? Did he insist on better use of technology to improve our transport system, so that he could benefit consumers and protect taxpayers, or did he just watch from the sidelines and drink the Kool-Aid? This may have passed him by, but so far, in its first two months in office, the Labour party has already handed out £14 billion to its trade union paymasters in no-strings-attached public sector pay deals. So it is all well and good for the Secretary of State to grandstand at the Dispatch Box, but the facts are painting a different picture. I just hope that he can find his voice and stand up for the tech sector before it is too late.
Let me explain why this is so important. We have already heard the Secretary of State repeat the farcical claims about the Chancellor’s “black hole”, having inherited a tech economy that was the third most valuable in the world; a tech economy that was being recognised across the world for its ability to nurture more tech unicorns—that is, more companies valued at £1 billion—than France, Germany and Sweden combined; a tech economy that was growing and creating millions of jobs annually, and attracting billions of pounds-worth of investment.
If the Secretary of State did make the decision to cut the £1.3 billion of exascale funding and now talks down that same tech economy, and in doing so undermines those very tech entrepreneurs who will help to fund our public services for decades to come and makes it less attractive for investors to invest, he cannot sincerely stand at the Dispatch Box and argue that he believes in economic growth—not if his first major economic act was one of economic mutilation. I implore him to go back to the Chancellor and challenge this decision. He should not let the Chancellor’s political games undermine him or the tech industry, which has so much potential. It is in his power to ensure that we nurture tech innovation so that the tech start-ups of today can become the tech giants of tomorrow. I say to the Secretary of State that he should not squander this opportunity. Otherwise, his legacy will be defined by what he did not do, rather than what he did do.
(5 months, 4 weeks ago)
Commons Chamber

With permission, Mr Deputy Speaker, I shall make a statement on the AI Seoul summit, which the Government co-hosted with the Republic of Korea earlier this week.
The AI Seoul summit built on the legacy of the first AI safety summit, hosted by the UK at Bletchley Park in November 2023. At Bletchley, 28 countries and the European Union, representing the majority of the world’s population, signed the Bletchley declaration agreeing that, for the good of all, artificial intelligence should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. The same set of countries agreed to support the development of an international, independent and inclusive report to facilitate a shared science-based understanding of the risks associated with frontier AI.
At the same time, the UK announced the launch of our AI Safety Institute, the world’s first Government-backed organisation dedicated to advanced AI safety for the public good. World leaders, together with the leaders of the foremost frontier AI companies, agreed to the principle that states have a role in testing the most advanced models.
Since Bletchley, the UK has led by example with impressive progress on AI safety, both domestically and bilaterally. The AI Safety Institute has built up its capabilities for state-of-the-art safety testing. It has conducted its first pre-deployment testing for potential harmful capabilities on advanced AI systems, set out its approach to evaluations and published its first full results. That success is testament to the world-class technical talent that the institute has hired.
Earlier this week, the Secretary of State announced the launch of an office in San Francisco that will broaden the institute’s technical expertise and cement its position as a global authority on AI safety. The Secretary of State also announced a landmark agreement with the United States earlier this year that will enable our institutes to work together seamlessly on AI safety. We have also announced high-level partnerships with France, Singapore and Canada.
As AI continues to develop at an astonishing pace, we have redoubled our international efforts to make progress on AI safety. Earlier this week, just six months after the first AI safety summit, the Secretary of State was in the Republic of Korea for the AI Seoul summit, where the same countries came together again to build on the progress we made at Bletchley. Since the UK launched our AI Safety Institute six months ago, other countries have followed suit; the United States, Canada, Japan, Singapore, the Republic of Korea and the EU have all established state-backed organisations dedicated to frontier AI safety. On Tuesday, world leaders agreed to bring those institutes into a global network, showcasing the Bletchley effect in action. Coming together, the network will build “complementarity and interoperability” between their technical work and approaches to AI safety, to promote the safe, secure and trustworthy development of AI.
As part of the network, participants will share information about models, and their limitations, capabilities and risks. Participants will also monitor and share information about specific AI harms and safety incidents, where they occur. Collaboration with overseas counterparts via the network will be fundamental to making sure that innovation in AI can continue, with safety, security and trust at its core.
Tuesday’s meeting also marked an historic moment, as 16 leading companies signed the frontier AI safety commitments, pledging to improve AI safety and to refrain from releasing new models if the risks are too high. The companies signing the commitments are based right across the world, including in the US, the EU, China and the Middle East. Unless they have already done so, leading AI developers will now publish safety frameworks on how they will measure the risks of their frontier AI models before the AI action summit, which is to be held in France in early 2025. The frameworks will outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure that thresholds are not surpassed. In the most extreme circumstances, the companies have also committed to
“not develop or deploy a model or system at all”
if mitigations cannot keep risks below the thresholds. To define those thresholds, companies will take input from trusted actors, including home Governments, as appropriate, before releasing them ahead of the AI action summit.
On Wednesday, Ministers from more than 28 nations, the EU and the UN came together for further in-depth discussions about AI safety, culminating in the agreement of the Seoul ministerial statement, in which countries agreed, for the first time, to develop shared risk thresholds for frontier AI development and deployment. Countries agreed to set thresholds for when model capabilities could pose “severe risks” without appropriate mitigations. Those risks could include helping malicious actors to acquire or use chemical or biological weapons, and AI’s potential ability to evade human oversight. That move marks an important first step as part of a wider push to develop global standards to address specific AI risks. As with the company commitments, countries agreed to develop proposals alongside AI companies, civil society and academia for discussion ahead of the AI action summit.
In the statement, countries also pledged to boost international co-operation on the science of AI safety, by supporting future reports on AI risk. That follows the publication of the interim “International Scientific Report on the Safety of Advanced AI” last week. Launched at Bletchley, the report unites a diverse global team of AI experts, including an expert advisory panel from 30 leading AI nations from around the world, as well as representatives from the UN and the EU, to bring together the best existing scientific research on AI capabilities and risks. The report aims to give policymakers across the globe a single source of information to inform their approaches to AI safety. The report is fully independent, under its chair, Turing award winner Yoshua Bengio, but Britain has played a critical role by providing the secretariat for the report, based in our AI Safety Institute. To pull together such a report in just six months is an extraordinary achievement for the international community; Intergovernmental Panel on Climate Change reports, for example, are released every five to seven years.
Let me give the House a brief overview of the report’s findings. It recognises that advanced AI can be used to boost wellbeing, prosperity and new scientific breakthroughs, but notes that, as with all powerful technologies, current and future developments could cause harm. For example, malicious actors can use AI to spark large-scale disinformation campaigns, fraud and scams. Future advances in advanced AI could also pose wider risks, including labour market disruption and economic power imbalances and inequalities. The report also highlights that, although various methods exist for assessing the risk posed by advanced AI models, all have limitations. As is common with scientific syntheses, the report highlights a lack of universal agreement among AI experts on a range of topics, including the state of current AI capabilities and how these could evolve over time. The next iteration of the report will be published ahead of the AI action summit early next year.
Concluding the AI Seoul summit, countries discussed the importance of supporting AI innovation and inclusivity, which were at the core of the summit’s agenda. We recognised the transformative benefits of AI for the public sector, and committed to supporting an environment which nurtures easy access to AI-related resources for SMEs, start-ups and academia. We also welcomed the potential of AI to provide significant advances to resolve the world’s great challenges, such as climate change, global health, and food and energy security.
The Secretary of State and I are grateful for the dedication and leadership shown by the Republic of Korea in delivering a successful summit in Seoul, just six short months after the world came together in Bletchley Park. It was an important step forward but, just as at Bletchley, we are only just getting started. The rapid pace of AI development leaves us no time to rest on our laurels. We must match that speed with our own efforts if we are to grip the risks of this technology, and seize the limitless benefits it can bring to people in Britain and around the world.
The UK stands ready to work with France to ensure that the AI action summit continues the legacy that we began in Bletchley Park, and continued in Seoul, because this is not an opportunity we can afford to miss. The potential upsides of AI are simply immense, but we cannot forget that this is the most complex technology humanity has ever produced. As the Secretary of State said in Seoul, it is our responsibility to ensure that human wisdom keeps pace with human knowledge.
I commend the Secretary of State and the Prime Minister for all the work they have done on the issue, and I commend this statement to the House.
I am grateful to the Minister for advance sight of his statement.
I hope this is in order, Mr Deputy Speaker, because I note that the Minister for Employment, the hon. Member for Bury St Edmunds (Jo Churchill), is on the Front Bench, and that she is not standing at the general election. I know she has been very cross with me on occasions over the past few years—she is probably still cross with me now. [Interruption.] As the Minister says, she is only human. On a personal note, as we have both been cancer sufferers—or survivors—and have both had more than one rodeo on that, it is sad that she is leaving. I am sure she will continue to fight for patients with cancer and on many other issues, and I pay tribute to her. It has been a delight to work with her over these years; I hope she will forgive me one day.
The economic opportunities for our country through artificial intelligence are, of course, outstanding. With the right sense of mission and the right Government, we can make the most of this emerging technology to unlock transformative changes in our economy, our NHS and our public services. Let us just think of AI in medicine. It is a personal hope that it might soon be possible to have an AI app that can accurately assess whether a mole on somebody’s back, arm or leg—or the back of their head—is a potential skin cancer, such as melanoma. That could definitely save lives. We could say exactly the same about the diagnosis of brain injury, many other kinds of cancer and many other parts of medicine. There could be no more important issue to tackle, but I fear the Government have fluffed it again. Much as I like the Minister, his statement could have been written by ChatGPT.
I have a series of questions. First, let me ask about the
“shared risk thresholds for frontier AI development and deployment”,
which the Minister says Governments will be developing. How will they be drawn up? What legal force will they have in the UK, particularly if there is to be no legislation, as still seems to be in the mind of the Government?
Secondly, the Secretary of State hails the voluntary agreements from the summit as a success, but does that mean companies developing the most advanced AI are still marking their own homework, despite the potential risks?
Thirdly, the Minister referred several times to “malicious actors”. Which “malicious actors” is he referring to? Does that include state actors? If so, how is that work integrated with the cyber-security strategy for the UK? How will that be integrated with the cyber-security strategy during the general election campaign?
Fourthly, the Government’s own artificial intelligence adviser, Professor Yoshua Bengio, to whom the Minister referred, has said that it is obvious that more regulatory measures will be needed, by which he means regulations or legislation of some kind. Why, therefore, have the Government not even taken the steps that the United States has taken using President Biden’s Executive order?
Next, have the commitments made six months ago at the UK safety summit been kept, or are these voluntary agreements just empty words? Moreover, have the frontier AI companies, which took part in the Bletchley summit, shared their models with the AI Safety Institute before deploying them, as the Prime Minister pledged they would?
Next, the Government press release stated that China participated in person at the AI Seoul summit, so can the Minister just clear up whether it signed the ministerial statement? As the shadow Minister for creative industries, may I ask why there were no representatives of the creative industries at the AI summit? Why none at all, despite the fact that this is a £127 billion industry in the UK, and that many people in the creative industries are very concerned about the possibilities, the threats, the dangers and the risks associated with AI for remuneration of creators?
The code of practice working group, which the Government set up and which was aiming at an entirely voluntary code of conduct, has collapsed, so what is the plan now? The Government originally said that they would still consider legislation, so is that still in their mind?
I love this next phrase of the Minister’s. He said, “We are only just getting started”. Clearly, somebody did not do any editing. What on earth has taken the Government so long? A Labour Government would introduce binding regulation of the most powerful frontier AI companies, requiring them to report before they train models over a capability threshold, to conduct safety testing and evaluation and to maintain strong information security protections. Why have the Government not brought forward any of those measures, despite very strong advice from all of their advisers to do so?
Finally, does the Minister agree that artificial intelligence is there for humanity, and humanity is not there for artificial intelligence?
I share the sentiments that the hon. Gentleman expressed about my hon. Friend the Member for Bury St Edmunds (Jo Churchill). It was a very sweet thing that he said—the only sweet thing he has said from the Dispatch Box. My hon. Friend has been a great friend to me, giving me advice when I became a new father. Many people do not see the hard work that goes into the pastoral care that happens here, so I am personally very grateful to her. I know that she was just about to leave the Chamber, so I will let her do so. I just wanted to place on record my thanks and gratitude to her.
I am a bit disappointed with the hon. Member for Rhondda (Sir Chris Bryant), although I have a lot of time for him. Let me first address the important matter of healthcare. We obviously hugely focus on AI safety; we have taken a world-leading position on AI safety, which is what the Bletchley and the Seoul declarations were all about.
Ultimately, the hon. Member’s final statement about AI being for humanity is absolutely right. We will continue to work at pace to help build trust in AI, because it can be a transformative tool in a number of different spheres—whether it is in the public sector or in health, as the hon. Member quite rightly pointed out. On a personal note, I hope that, as a cancer survivor, he has the very best of health for a long time to come.
Earlier this week, the Prime Minister spoke about how AI can help in the way that breast cancer scans are looked at. I often talk about Brainomix, which has been greatly helpful to 37 NHS trusts in the early identification of strokes. That means that three times more people are now living independently than was previously possible. AI can also be used in other critical pathways. Clearly, AI will be hugely important in the field of radiotherapy. The National Institute for Health and Care Excellence has already recommended that AI technologies are used in the NHS to help with the contouring of CT and MRI scans and to plan radiotherapy treatment and external therapy for patients.
The NHS AI Lab was set up in 2020 to accelerate the development and the deployment of safe, ethical and effective AI in healthcare. It is worth saying that the hon. Member should not underestimate the complexity of this issue. Earlier this year, I visited a start-up called Aival, which the Government helped to fund through Innovate UK. The success of the AI models varies depending on the different machines that are used and how they are calibrated, so independent verification of the AI models, and how they are employed in the health sector specifically, is very important.
In terms of malicious actors, the hon. Member will understand that I cannot go into specific details for obvious reasons, but I assure him, as someone who sits on the defending democracy taskforce, led by the Security Minister, that we have been looking at pace at how to protect our elections. I am confident that we are prepared, having taken a cross-governmental approach, including with our agencies. It is hugely important that we ensure that people can have trust in our democratic process.
The hon. Member is right that these are voluntary agreements. I was surprised by his response, because we said clearly in our response to the White Paper that we will keep the regulator-led approach, which we have invested money in. We have given £10 million to ensure that the regulator increases its capability across a whole range of areas. We have also said that we will not be afraid to legislate when the time is right. That is a key difference between what the Opposition talk about and what we are doing. Our plan is working, whereas the Opposition keep talking about legislating but cannot tell us what they would legislate for.
There is no robust detail. I see that has exercised the hon. Member, who is chuntering from a sedentary position. The Opposition just have no serious plan for this.
The results speak for themselves. Around two weeks ago, we had a number of significant investments and a significant amount of job creation in the UK, with investment from CoreWeave, and almost £2 billion—[Interruption.] Those on the Opposition Front Bench would do well to listen to this. We had £2 billion of investment. Scale AI has put its headquarters in the UK. That shows our world-leading position, which is exactly why we co-hosted the Seoul summit and will support the French when they have their AI action summit. It goes to show the huge difference in our approach. We see safety as an enabler of growth and innovation, and that is exactly what we are doing.
The work goes on with the creative industries. It is hugely important, and we will not shy away from the most difficult challenges that AI presents.
I thought the shadow Minister was wise to draw attention to the potential benefits of AI in particular for health research and treatment—notably brain injury, a subject in which he and I share a passionate interest—but foolish, if I might say so, to be churlish about the steps that the Government have already taken. The Government deserve great credit for taking a lead on this internationally, and establishing the first organisation dedicated to AI safety in the world.
I thank and congratulate the Minister on that, but in balancing the advantages and risks—the costs and benefits—will he be clear that the real risk is underestimating the effect that AI may have? The internet has already done immense damage, despite the heady optimism at the time it was launched. It has brutalised discourse and blurred the distinction between truth and fiction, and AI could go further to alter our very grasp of reality. I do not want to be apocalyptic, but that is the territory that we are in, and it requires the most considered treatment if we are not to let those risks become a nightmare.
I completely agree with my right hon. Friend. We recognise the risks and opportunities that AI presents. That is why we have tried to balance safety and innovation. I refer him to the Online Safety Act 2023, which is a technology-agnostic piece of legislation. AI is covered across a range of areas where the Act addresses illegal harms. He is right to say that this is about helping humanity to move forward. It is absolutely right that we should be conscious of the risks, but I am also keen to support our start-ups, our innovative companies and our exciting tech economy to do what they do best and move society forward. That is why we have taken this pro-safety, pro-innovation approach; I repeat that safety in this field is an enabler of growth.
I would like to thank Sir Roger Gale, who has just left the Chair. He has been excellent in the Chair and I have very much enjoyed his company as well as his chairing.
I thank the Government for advance sight of the statement. My constituents and people across these islands are concerned about the increasing use of AI, not least because of the lack of regulation in place around it. I have specific questions in relation to the declarations and what is potentially coming down the line with regulation.
Who will own the data that is gathered? Who has responsibility for ensuring its safety? What is the Minister doing to ensure that regard is given to copyright and that intellectual property is protected for those people who have spent their time and energy and massive talents in creating information, research and artwork? What are the impacts of the use of AI on climate change? For example, it has been made clear that using this technology has an impact on the climate because of the intensive amounts of electricity that it uses. Are the Government considering that?
Will the Minister ensure that in any regulations that come forward there is a specific mention of AI harms for women and girls, particularly when it comes to deepfakes, and that they and other groups protected by the Equality Act 2010 are explicitly mentioned in any regulations or laws that come forward around AI? Lastly, we waited 30 years for an Online Safety Act. It took a very long time for us to get to the point of having regulation for online safety. Can the Minister make a commitment today that we will not have to wait so long for regulations, rather than declarations, in relation to AI?
The hon. Lady makes some interesting points. The thing about AI is not just the large language models, but the speed and power of the computer systems and the processing power behind them. She talks about climate change and other significant issues we face as humanity; that power to compute will be hugely important in predicting how climate change evolves and weather systems change. I am confident that AI will play a huge part in that.
AI does not recognise borders. That is why the international collaboration and these summits are so important. In Bletchley we had 28 countries, plus the European Union, sign the declaration. We had really good attendance at the Seoul summit as well, with some really world-leading declarations that will absolutely be important.
I refer the hon. Lady to my earlier comments around copyright. I recognise the issue is important because it is core to building trust in AI, and we will look at that. She will understand that I will not be making a commitment at the Dispatch Box today, for a number of reasons, but I am confident that we will get there. That is why our approach in the White Paper response has been well received by the tech and AI industries.
The hon. Lady started with a point about how constituents across the United Kingdom are worried about AI. That is why we all have to show leadership and reassure people that we are making advances on AI and doing it safely. That is why our AI Safety Institute was so important, and why the network of AI safety institutes that we have helped to advise on and worked with other countries on will be so important. In different countries there will be nuances regarding large language models and different things that they will be approaching—and sheer capability will be a huge factor.
I pay tribute to the Government for their approach on AI. The growth of AI, and its exponential impact, has really not yet landed with most people around the world. The scale and impact of that technology is truly once in a generation, if not once in history. Ensuring that we work around the world to harness that incredibly powerful force for good for humanity is vital. It is good to see the UK playing a leading role in that and, frankly, it is good to see a cross-party approach, because this is bigger than party politics. Will all those involved—the Minister, Lord Camrose, the Secretary of State and the Prime Minister—ensure that the agenda of empowering the development of AI and putting guardrails in place is absolutely at the centre not just of UK policy but of policy across the world?
I put on record my personal thanks to my right hon. Friend for all that he has done. We worked very closely together on the introduction of the integrated care board when he was Health Secretary, and it continues to be hugely beneficial to my constituents. He raises important points about the opportunities of AI and the building of trust, which I have also spoken about. However, he mentioned a “cross-party approach”. I am not sure that the Opposition are quite there yet in terms of their approach. I say to the Opposition that there is a great tech story in this country: we are now the third most valuable tech economy in the world, worth over $1 trillion; we have more unicorns than France, Germany and Sweden combined; we have created 1.9 million more jobs—over 22% more—than at pre-pandemic levels; and, as I have said, just over £2 billion of investment has come in just the last fortnight. We believe in British entrepreneurs, British innovation and British start-ups. The real question is: why do the Opposition not believe in Britain?
I welcome the Minister’s statement. He is right to say that many Members across the Chamber support the Government’s clear goals and objectives. The continued focus on the Bletchley declaration is to be welcomed, and I welcome the drive to prevent disinformation and other concerns. However, although information and practice sharing will be almost universal, we must retain the ability to prevent the censorship of positions that may not be popular but should not be censored, and ensure that cyber-security is a priority for us nationally, primarily followed by our international obligations.
The hon. Gentleman is absolutely right to say that AI will play a huge role in cyber-security. We recently launched our codes of practice for developers in the cyber-security field. AI will be the defining technology of the 21st century—it is hugely important—and his questions highlight exactly why we have taken this approach. We want our regulators, which are closest to their industries, to define and be on top of what is going on. That is why we have given them capacity-building funds and asked them to set out their plans, which they did at the end of April, and we will continue to work with them.
It sounds as if there was a fair bit of discussion at the summit about AI in healthcare, particularly on its use as a medical device. The Minister will know that it has great potential, and I heard his exchanges just a moment ago. To give just one example, AI can support but not replace clinicians in mammography readings. Does he agree that we must follow the strong lead of the US in this area by ensuring that the regulatory landscape is in the right place to assist this innovation, not get in the way of it?
My hon. Friend makes a hugely important point. I refer him to what I said earlier. It was insightful for me to see how transformative AI can be in health. When I visited Aival, for example, I gained insight into the complexity of testing AI across different machines, depending on who has manufactured and calibrated them. The regulator will play a huge role, as he can imagine, whether on heart disease, radiotherapy, or DeepMind’s work in developing AlphaFold.
I congratulate the Minister on all his enthusiastic work on AI. In his statement, he referred to the frontier AI safety commitments, and 16 companies were mentioned. One of those was Zhipu AI of Tsinghua Daxue—Tsinghua University in China—which is, of course, one of the four new AI tigers of China. How important is the work that the Minister is doing to ensure China is kept in the tent when it comes to the safety and regulation of AI, so that we do not end up with balkanisation when it comes to AI?
My hon. Friend makes a really important point. I will not try to pronounce the name of that university or that company; what I will say is that AI does not recognise borders, so it is really important for China to be in the room, having those conversations. What those 16 companies signed up to was a world first, by the way: companies from the US, the United Arab Emirates, China and, of course, the UK signed that commitment. This is the first time that they have agreed in writing that they will not deploy or develop models that test the thresholds. Those thresholds will be defined at the AI action summit in France, so my hon. Friend is exactly right that we need a collaborative global approach.
I thank the Minister for his statement.
(6 months, 1 week ago)
Westminster Hall
Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not comprise part of the official record
It is a pleasure to serve under your chairmanship, Sir George. I commend my hon. Friend the Member for Penistone and Stocksbridge (Miriam Cates) for securing this important debate. She is a passionate campaigner, and I thank her for her engagement on this issue on numerous occasions, including by coming to see me. I also thank the hon. Member for Rhondda (Sir Chris Bryant); worryingly, this is the second time in about 24 hours that I have found myself agreeing with him.
The hon. Gentleman calls on me to resign. Before he asks me to join him on his Benches, I should say that a space on our Benches recently became available, if he wants it. I found myself in considerable agreement with him.
I thank all hon. Members for their contributions. This is clearly a hugely complex issue. I want to start by stating that before being a Minister, I am a parent. I probably make my colleagues sick by talking about that constantly, but it is one of the most rewarding and fulfilling privileges of my lifetime. Being a parent is also one of the scariest things. I have to worry, as we all do, about whether they will grow up to be healthy, make friends at school and, now, whether they will be safe in the online world as well as in the offline world.
I also want my children to have a fulfilling childhood, to learn the skills of tomorrow while we protect them online. Therein lies the conundrum.
I will make a little progress. I want to focus on the issue of research and data. The UK chief medical officer, among others, has systematically reviewed the scientific evidence and concluded that an association between screen-based activities and poor mental health exists, but existing research does not yet prove a causal relationship. Other investigations, however, such as those by Professor Haidt, as mentioned by my hon. Friend the Member for Penistone and Stocksbridge, into the link between these technologies and mental health have suggested a harmful relationship. The scientific community is considering Professor Haidt’s findings, and we are watching that discussion with interest.
I want to reassure hon. Members that on research and causality, I am considering every option to ensure we leave no stone unturned. I will look at this very closely to ensure that any policies that come forward are based on science and data.
I thank my hon. Friend for his reassuring insistence that he will look into the data. The US Surgeon General, who recently visited Parliament, made the point that, if social media or smartphones were a drug, they would be immediately withdrawn from the market because of the harm they are reputed to cause. Even if the full causality is not as established as the Minister wants, is the evidence not so clear and the impact so harmful that it would be sensible to withdraw social media before conducting that research?
I thank my hon. Friend, who has made that point passionately, both here and in private. The important thing is to have the data to back such a significant conclusion, because social media also benefits young people and society and a balance has to be achieved.
I am going to make some progress. To be clear, that does not rule out taking a precautionary approach. We need to consider the impacts carefully before taking action. As the National Society for the Prevention of Cruelty to Children said before this debate, it is important to strike the right balance between protecting children from harm and allowing them to reap the benefits of safe internet use. We will continue to explore options in this space. I welcome further engagement, research and evidence in this area to inform our policies.
On those points, does the Minister agree that this is not just about addiction for some, but about dependency and harm for many? Artificial intelligence is only going to supercharge this. Does he agree that tech companies need to be held to account and ensure protections are in place, and that Ofcom needs to use the powers it has been given to force them to do that?
I thank my hon. Friend for that intervention. Let me say clearly that there is no reason why the tech companies could not have acted over the past few years. There is no reason for them to wait for Ofcom’s code of practice; they should be getting on with the job. I said that as a Back Bencher, and I mean it. The Online Safety Act is what we consider to be technology-agnostic. It covers a lot of the instances of AI, but we obviously continue to monitor the situation.
I am so glad that my hon. Friend says he is looking at all options to keep children safe. On the issue of preventing children from being able to upload sexual content or from being groomed into uploading sexual images, will he look at the suggestion put to me by the national police lead and others of putting controls at systems level, so that a phone cannot upload that content when the upload is by a child?
I will limit further interventions due to the time I have, but I will write to my right hon. Friend on that issue.
I will make some progress. We are aware of the ongoing debate regarding the age at which children should have a smartphone. We recognise the risks that technology such as smartphones pose, but I would argue that a ban would not necessarily achieve the outcome we wish. As has already been said, children can find ways through. We also have to consider who we are criminalising and how legislation would intervene in the lives of the private individual. We live in a digital age and many parents want their children to have a smartphone, as they provide benefits to children and parents, such as staying connected while travelling alone. In other words, trying to protect children from one harm may well lead to another. I speak to many parents who give me the other side of the argument, and I wanted to put that on the record.
The decision on whether a child should have access to a smartphone should not be one for Government. Instead, we should empower parents to make the right call for their children and their individual circumstances. In fact, parents as consumers can influence the market themselves. It is my belief that choice is a liberty that parents and children should be allowed to exercise.
I agree that online platforms must take responsibility for the harmful effects of the design of their services and their business models. That is why the Online Safety Act is a groundbreaking piece of legislation, which puts the onus on platforms to ensure that children are protected. I want to reassure parents that the legislation will change significantly how our children grow up in the online world. If social media companies do not do the right thing, we have given Ofcom the teeth to go after them—and I fully expect it to do so.
Children’s wellbeing is at the heart of the Act, and the strongest protections are for children. Under the new regulations, social media platforms will need to assess the risks of their services facilitating illegal content and activity. That includes illegal abuse, harassment or stirring up hatred. They will need to assess the risk of children being harmed on their services by content that does not cross the illegal threshold, but that is harmful to them, which is something that was brought up.
I will make some progress as I am very short on time, and I want to give my hon. Friend the Member for Penistone and Stocksbridge time to respond.
I want to be unequivocal here: the Online Safety Act ensures that the UK is the safest place to be online, requiring all companies to take robust action against illegal content. Last week, Ofcom published the draft codes of practice for the child safety rules. Those protections are a real game changer and will establish the foundation to protect generations to come. I commend Ofcom for its proposals. It rightly puts the onus on big tech to do the right thing and keep our children safe. I say this to big tech: with great reward comes great responsibility. They have that responsibility and they must act.
The codes identify risks that children need to be protected from, and they also set out the requirement for platforms to implement highly effective age assurance technology to prevent children from encountering harmful content on their services, including pornography, and content that depicts serious violence or promotes serious self-harm, suicide and eating disorders.
Tackling suicide and self-harm material is a key objective of the Online Safety Act. We have heard too many stories of the devastating impact of that content, and I commend all the parents who have campaigned on the issue. They have gone through the most unimaginable, heartbreaking and heart-wrenching challenges. We continue to engage with them, and I commend them for their bravery. There is a live consultation on age assurance at the moment and I encourage all Members to engage with that.
My hon. Friend the Member for Redditch (Rachel Maclean) raised a number of key issues and I will write to her in response. She also talked about parental responsibility, which is important. I think she raised the issue of chat functions, which are also in the scope of the Online Safety Act. The hon. Member for Stirling (Alyn Smith) spoke about the tragic case of Murray Dowey. I offer my condolences to the parents and my open door; I would be more than happy to meet them with the hon. Member in attendance.
My hon. Friends the Members for Stoke-on-Trent North (Jonathan Gullis) and for Great Grimsby (Lia Nici) talked about the responsibility of the Department for Education. I am sure that has been heard, and I will continue to engage with Ministers. My right hon. Friend the Member for North East Hampshire (Mr Jayawardena) talked about his Nokia 3210. Nokia has started remarketing the 3210, so he should look forward to a Christmas present—not from me, but from someone who likes him. I wish him all the best with that.
My final comment is that I would be happy to meet my hon. Friend the Member for Penistone and Stocksbridge, as would the Secretary of State.
(8 months, 3 weeks ago)
I thank you, Mrs Harris, for your excellent chairmanship of this over-subscribed debate on an important topic. I thank the hon. Member for Ellesmere Port and Neston (Justin Madders) for securing the debate. I am grateful to him and other speakers for their insightful contributions. I am conscious of time, so I will be limiting the interventions I take, as I want to try to address as many of the issues that have been raised as I can.
Digital technologies offer extraordinary opportunities; if we take full advantage of them, we can grow our economy, create new jobs and improve lives for British people right across the country. They can have other benefits too, such as connecting communities, reducing loneliness and making public services easier and faster to access. All those points have been very well made today. Right now, though, too many people across the country cannot experience those benefits.
Digitally excluded people are less likely to be in well-paying jobs, and they have worse health outcomes and an overall lower quality of life. As a result, digital exclusion does not just create new inequalities, but exacerbates existing ones, making it more difficult to fully participate in society. That is why, even as we look towards investing in the transformative technologies of tomorrow, from AI to quantum, the Government remain resolutely committed to ensuring no one is left behind in today’s digital age. If Britain is to be a real science and tech superpower, our superpower status has got to deliver tangible benefits for every British person.
We are under no illusions: this is a difficult task that requires work right across Government to address the many complex barriers we face. That is why the 2022 digital strategy outlined work across Government that will promote digital inclusion, from accelerating the roll-out of gigabit broadband to delivering landmark legislation to make the UK the safest place in the world to be online. By doubling down on the four key principles we set out 10 years ago in the digital inclusion strategy—access, skills, motivation and trust—we believe we have the foundations in place to succeed. I will now take each of these principles in turn.
First, on access, we understand the importance of staying connected in the modern age. That is why we have prioritised access to fixed and mobile broadband, including wi-fi, affordable tariffs and access to suitable devices. To ensure everyone has the access they need, the Government introduced the broadband universal service obligation in 2020, which gives everyone the legal right to request a decent and affordable broadband connection of at least 10 megabits per second. To ensure the USO remains up to date, in October 2023 we launched a consultation to review the obligation and will be publishing a Government response later this year. In March 2021 we launched Project Gigabit, our £5 billion mission to deliver lightning-fast, reliable broadband to the hardest-to-reach parts of the UK, areas that would have otherwise been left out of commercial gigabit roll-out plans without Government subsidy.
Last week we announced that 1 million premises across the UK have received a gigabit-capable connection thanks to Government investment. The majority of these premises are in hard-to-reach locations where previously many people would have struggled to stream TV shows, access online services or run small businesses. I am happy to report that, as I am sure the hon. Member for Ellesmere Port and Neston already knows, his constituency benefits from excellent broadband connectivity. In his constituency, over 99% of premises can access a superfast connection, while 93% can access a gigabit-capable connection.
I thank the Minister for giving way. I am very envious of the hon. Member for Ellesmere Port and Neston (Justin Madders) for having such high levels of connectivity. Those of us who find ourselves in the Project Gigabit type C contract are now seeing that the voucher schemes have been turned off. Would the Minister agree that we need that procurement system to be speeded up so that we can all get to at least 99%?
I thank my hon. Friend for making that point and I will come on to some of the issues that she has raised; I am also happy to have a conversation with her about what support her community needs.
We know that, in addition to excellent coverage, we have competitive pricing in the UK. The cost of a gigabyte of data is 50p in the UK; that is less than half the average price in the EU, which is £1.18. We have also worked closely with the telecoms industry to ensure the availability and provision of low-cost, high-quality fixed and mobile social tariffs in the market. In total, 27 operators now offer social tariffs across 99% of the UK to those on universal credit and some other means-tested benefits.
We have seen social tariff take-up increase by almost 160% since September 2022. Although this represents just 8% of the total number of eligible households, progress is being made and we will continue to work with telecoms providers to increase awareness of this provision. We have also supported access to devices and wi-fi. Around 2,900 public libraries in England provide a trusted network of accessible locations with free wi-fi, which is funded by the Department for Culture, Media and Sport.
The Department for Education has also delivered over 1.95 million laptops and tablets to schools, trusts, local authorities and further education providers for disadvantaged children and young people since 2020. This is part of a £520 million Government investment to support access to remote education and online social care services. To support those seeking work, our Jobcentre Plus work coaches can provide support to eligible claimants who are not online, with financial support to buy six months’ worth of broadband connection. This scheme is administered by the Department for Work and Pensions through the flexible support fund, and I thank my right hon. Friend the Member for Suffolk Coastal (Dr Coffey), who did excellent work through the pandemic. I am sure that I must have written to her on behalf of my constituents during that very uncertain time, and I will certainly take away her points and ideas.
I will make some more progress, if that is okay.
That package, which includes free wi-fi, access to devices and affordable fixed and mobile tariffs for 99% of the UK, supports access to the digital products and services that are needed for modern life.
Now I turn to the issue of digital skills. As well as working to provide the right access, we are working to ensure that everyone has the right skills to be able to navigate their personal and professional lives. On a personal note, this is a particular passion of mine and something that I wholeheartedly believe in. My hon. Friend the Member for Derbyshire Dales (Miss Dines) mentioned digital skills in her contribution, as did other Members in theirs.
Digital skills are central to the jobs of today and the workforce of tomorrow. Ensuring that the workforce has the digital skills for the future will be crucial to meet the UK’s ambition to be a global science and tech superpower. We are supporting skills development at every level—or, as I like to say, at every age and at every stage.
The Department for Education supports adults with low digital skills through the digital entitlement, which fully funds adults to gain essential digital skills qualifications, based on the essential digital skills framework. Since the introduction of the digital entitlement in 2020, the Department has supported over 40,000 learners to study for a qualification in essential digital skills. We are working closely with the Department for Education, industry and academia through the digital and computing skills education taskforce, which was launched last summer to increase the numbers of students choosing digital and tech educational pathways into tech careers.
To inspire the next generation of tech professionals, we have also launched two initiatives: the Cyber Explorers platform for 11 to 14-year-olds, which has reached almost 60,000 students; and the CyberFirst Girls competition, which supported 12,500 12 and 13-year-old girls in 2023 alone.
The Department for Education also funds digital skills provision through community learning, which is an important stepping stone for learners, particularly post-19 disadvantaged learners, who are not ready for formal accredited learning or who would benefit from learning in a more informal way.
In June 2022, the Government launched the Digital Skills Council, which I co-chair. It brings together Government and industry to strengthen the digital workforce. Last year, the Digital Skills Council partnered with FutureDotNow to fund the publication of the digital skills roadmap, which lays out collective commitments to ensure that all working-age adults have basic digital capabilities.
Finally, we are also supporting people to develop advanced skills in our priority technology areas. We have established the £30 million artificial intelligence and data science conversion course programme to broaden the supply of AI talent in the UK. It funds universities to develop master's-level AI and data science courses suitable for non-STEM students, as well as up to 2,600 scholarships for students from under-represented backgrounds. Just last week we launched a pilot advertising campaign designed to generate awareness of the benefits of learning advanced digital skills and to drive people towards a new website that has details of Government-funded digital skills bootcamps. These bootcamps are 16-week courses that are fully funded, with a guaranteed job interview at the end.
To support workers to understand and apply AI in their jobs, last year, in partnership with Innovate UK and the Alan Turing Institute, we published the first version of a new guidance document that helps businesses to identify what skills their non-technical workers need to be able to successfully use AI in the workplace.
The secondary barriers of trust and motivation, which I mentioned at the start, must be tackled to have a truly positive impact on digital inclusion, but those are harder to measure. We recognise that some people are hesitant to access online services because they fear they may become victims of fraud or that it is an unsafe environment for their personal data. We are taking a number of steps to improve the safety and trustworthiness of the online space, including through the Online Safety Act 2023. The Act will ensure that technology companies take more responsibility for the safety of their users online, particularly children. It is a major step in protecting UK citizens from the scourge of online scams. The motivation barrier requires influencing decision making and motivation at the individual level. That challenge is difficult to overcome and is best addressed through ensuring that access, skills and trust are in place, which is why those remain our focus. That is why we have supported work through libraries, charities and communities, including the digital lifeline fund, and why we continue to fund free public wi-fi in libraries across the UK.
There are many community-based initiatives at the local level, including work through libraries, as I have mentioned, and from the third sector, such as the National Digital Inclusion Network, run by the Good Things Foundation, which is a vital resource to many working in this space. The excellent work done by the Good Things Foundation, Age UK and others plays an important role in providing support with technology and the internet. Those charities supplement Government engagement by offering guides, training courses and volunteers to help people make the most of the internet.
I will address some of the issues raised around financial services. The Government recognise that digital payments play an incredibly important role for businesses and individuals, making many payments faster, easier and cheaper. However, the Government also believe that all customers, wherever they live, should have appropriate access to banking and cash services. It is imperative that banks and building societies recognise the needs of all their customers, including those who still need to use in-person services. The Government legislated through the Financial Services and Markets Act 2023 to protect access to cash for individuals and businesses. The Act establishes the Financial Conduct Authority as the lead regulator and provides it with the responsibility and powers to ensure that reasonable provision of cash withdrawal and deposit services is made, including free services for individuals.
The FCA recently consulted on proposals for its regulatory regime and expects to finalise its rules in the second half of the year. Everyday banking services can also be accessed through telephone banking, via the Post Office or at banking hubs. The Post Office allows personal and business customers to carry out everyday banking services at 11,500 Post Office branches across the UK, and banking hubs are a shared initiative that enables customers of participating banks to access cash and banking services in shared facilities.
The issue of local authorities was also raised. Digital inclusion interventions are included in the UK shared prosperity fund prospectus, which has allowed local authorities to allocate funding to such interventions. That is because we know from key stakeholders that digital inclusion interventions work best when they are tailored to local needs and when support is provided in the community on an ongoing basis. I was surprised to learn of the issues raised by my hon. Friend the Member for Erewash (Maggie Throup), who spoke about the disparity in non-digital access and cost discrimination. I did check, and I know that her Labour-led council is in charge of this matter. I hope that it is listening, and that it realises and appreciates that this is a priority for Government and should be a priority for it, too.
My hon. Friends the Members for North Devon (Selaine Saxby) and for North Norfolk (Duncan Baker) raised some important points about the switchover from the public switched telephone network. There was a wonderful plug for the all-party parliamentary group that my hon. Friend the Member for North Devon runs, and I am sure that has been heard loud and clear. The fact is that the way that landlines work in the UK is changing. Communication providers, such as BT and Virgin Media, are upgrading their old analogue landline network—also known as the PSTN—to a new digital technology that carries voice calls over an internet connection, which is also known as Digital Voice. The decision to switch off the analogue landline network was made by the telecoms industry, and a transition to Digital Voice networks is an industry-led process, which is expected to conclude in 2025.
However, the Government were made aware of some serious shortcomings in how the telecoms industry managed the PSTN migration. As a result, the Technology Secretary convened a meeting in December 2023 with the UK’s leading telecoms providers to discuss ways to improve the protection of vulnerable households through the migration. In response, the major telecoms providers have now signed a charter committing to concrete measures to protect vulnerable households, particularly those using telecare alarms. That is a positive step, which we hope will ensure that safety continues to be at the heart of the nationwide switchover.
Let me turn to next steps. Digital inclusion permeates every aspect of policy. I view it as part of a cross-Government agenda to integrate digital inclusion into all policy decisions, rather than as a stand-alone issue. My hon. Friend the Member for St Ives (Derek Thomas) mentioned the cross-Whitehall ministerial group for loneliness; I can assure him that I attended a meeting last week. I chair the group on digital inclusion, and I will be addressing there some of the issues that have been raised. All Departments are considering the needs of people who are digitally excluded in their policymaking.
The ministerial group on digital inclusion first met in September. It discussed issues such as parking payments, website accessibility and device donation schemes. I am looking forward to hearing updates on those areas from my ministerial colleagues at our next meeting in three weeks’ time. Since our last discussion, the Department for Transport, which leads on the national parking platform, has already said that it expects the full features of the NPP to be available from late 2024, making parking simpler and less stressful. The group also agreed to undertake a departmental mapping exercise and to review the viability of each Department joining donation schemes. This work is an important step forward in our joint efforts to tackle digital inclusion, and I look forward to building on these conversations.
In closing, I again thank the hon. Member for Ellesmere Port and Neston for raising such an important issue. I am hopeful that we can work together. We are working hard on this issue across Government and we have taken some credible steps to tackle it. As the digital transformation picks up pace, we know that there is more to do to ensure that no one is left behind in our digital age, but we are already rising to that challenge. Departments forming the cross-Whitehall ministerial group will work hand in hand across Government, as well as with industry and our partners in the third sector, to deliver the benefits of a better digital future for communities all over the country.
Written Statements
In July 2023, the Government launched a consultation in relation to internet domain name registries and domain name abuse. The consultation asked for views on the Government’s proposals for regulations defining prescribed practices and requirements, which are to be introduced once sections 19 to 21 of the Digital Economy Act 2010 come into force. Specifically, the consultation asked relevant parties for views on the draft list of misuses and unfair uses of domain names in scope, and on the proposed principles that will underpin the prescribed dispute resolution procedure.
It is important that we undertake this work to ensure that the UK continues to meet international best practice on the governance of country code top-level domains, in line with our key global trading partners and our future global trading commitments.
As outlined in a previous statement of 20 July 2023, the DEA 2010 sets out the Secretary of State for Science, Innovation and Technology’s powers of intervention in the event that any UK-related domain name registry fails to address serious, relevant abuses of its domain names, posing a significant risk to UK electronic communications networks and their users.
We received 39 responses to the consultation, which closed in August 2023. In November 2023, the Government published a summary of the responses received and have since been analysing the responses, consulting with technical and industry experts to develop our policy response.
We have today published the Government policy response to the consultation. A copy of both this document and the summary of responses will be placed in the Libraries of both Houses and published on gov.uk.
We will now set out in secondary legislation the list of misuses and unfair uses of domain names that registries in scope must take action to mitigate and deal with, alongside the registry’s arrangements for dealing with complaints in connection with the domain names in scope. This will provide additional certainty for UK users that appropriate procedures will continue to be in place to help address abuse of UK-related domain names.
[HCWS276]
Commons Chamber
First, let me put on the record how pleased I was to see my hon. Friend the Member for Watford (Dean Russell) back in his place, having heard about his health issues. I say that not just because his parents are constituents of mine or because he was born and brought up in my constituency, but because he is a dear friend of mine.
I thank my hon. Friend for securing this debate and raising the important issue of AI scams and the use of AI to defraud or manipulate people. I assure him that the Government take the issue very seriously. Technology is a fast-moving landscape and the pace of recent developments in artificial intelligence exemplifies the challenge with which we are presented when it comes to protecting our society.
I will start by being very clear: safely deployed, AI will bring great benefits and promises to revolutionise our economy, society and everyday lives. That includes benefits for fraud prevention, on which we are working closely with the Home Office and other Departments across Government. Properly used, AI can and does form the heart of systems that manage risk, detect suspect activity and prevent millions of scam texts from reaching potential victims. However, as my hon. Friend rightly identified, AI also brings challenges. To reap the huge social and economic benefits of AI, we must manage the risk that it presents. To do so, and thereby maintain public trust in these technologies, is key to effectively developing, deploying and adopting AI.
In the long term, AI provides the means to enhance and upscale the ability of criminals to defraud. Lone individuals could be enabled to operate like an organised crime gang, conducting sophisticated, personalised fraud operations at scale, and my hon. Friend spoke eloquently about some of the risks of AI. The Government have taken a technology-neutral approach. The Online Safety Act 2023 will provide significant protections from online fraud, including where AI has been used to perpetrate a scam. More broadly, on the services it regulates, the Act will regulate AI-generated content in much the same way that it regulates content created by humans.
Under the Online Safety Act, all regulated services will be required to take proactive action to tackle fraud facilitated through user-generated content. I am conscious that my hon. Friend may have introduced a new phrase into the lexicon when he spoke of AI-assisted criminals. I am confident that the Online Safety Act will be key to tackling fraud when users share AI-generated content with other users. In addition, the Act will mandate an additional duty for the largest and most popular platforms to prevent fraudulent paid-for advertising appearing on their services. This represents a major step forward in ensuring that internet users are protected from scams.
The Government are taking broader action on fraud, beyond the Online Safety Act. In May 2023, the Home Office published a fraud strategy to address the threat of fraud. The strategy sets out an ambitious and radical plan for how the Government, law enforcement, regulators, industry and charities will work together to tackle fraud.
On the points raised by the hon. Member for Strangford (Jim Shannon), the Government are working with industry to remove the vulnerabilities that fraudsters exploit, with intelligence agencies to shut down fraudulent infrastructure, and with law enforcement to identify and bring the most harmful offenders to justice. We are also working with all our partners to ensure that the public have the advice and support that they need.
The fraud strategy set an ambitious target to cut fraud by 10% from 2019 levels, down to 3.3 million fraud incidents by the end of this Parliament. Crime survey data shows that we are currently at this target level, but we are not complacent and we continue to take action to drive down fraud. Our £100 million investment in law enforcement and the launch of a new national fraud squad will help to catch more fraudsters. We are working with industry to block fraud, including by stopping fraudsters exploiting calls and texts to target victims. We have already blocked more than 870 million scam texts from reaching the public, and the strategy will enable us to go much further.
Social media companies should carefully consider the legality of different types of data scraping and implement measures to protect against unlawful data scraping. They also have data protection obligations concerning third-party scraping from their websites, which we are strengthening in the Data Protection and Digital Information Bill. That Bill will hit rogue firms that hound people with nuisance calls with tougher fines. The maximum fine is currently £500,000; under the Bill, it will rise to 4% of global turnover or £17.5 million, whichever is greater, to better tackle rogue activities and punish those who pester people with unwanted calls and messages.
I thank the Minister for a comprehensive and detailed response to the hon. Member for Watford; it is quite encouraging. My intervention focused on the elderly and vulnerable—what can be done for those who fall specifically into that category?
It is a great honour to be intervened on by the hon. Gentleman, who makes an important point. The Government will be doing more awareness raising, which will be key. I am willing to work with the hon. Gentleman to ensure that we make progress, because it is a key target that we must achieve.
Consumers are further protected by the Privacy and Electronic Communications (EC Directive) Regulations 2003, which govern the rules for direct marketing by electronic means. Under these regulations, organisations must not send marketing texts, phone calls or emails to individuals without their specific prior consent. We are also strengthening these regulations so that anyone making unwanted marketing calls can be fined if the calls could cause harm or disturbance to individuals, even if they go unanswered.
Beyond legislation, the Home Office and the Prime Minister’s anti-fraud champion worked with leading online service providers to create an online fraud charter. The charter, which was launched in November last year, sets out voluntary commitments from some of the largest tech firms in the world to reduce fraud on their platforms and services and to raise best practice across the sector.
This includes commitments to improve the blocking of fraud at source, making reporting fraud easier for users and being more responsive in removing content and ads found to be fraudulent. The charter will also improve intelligence sharing and better educate users about the risk on platforms and services, in response to the point of the hon. Member for Strangford.
Public awareness is a key defence against all fraud, whether or not AI-enabled. As set out in the fraud strategy, we have been working with leading counter-fraud experts and wider industry to develop an eye-catching public communications campaign, which we anticipate going live next month. This will streamline fraud communications and help people to spot fraud and take action to avoid it.
None the less, it is important to remember that this work is taking place in a wider context. The UK is leading the way in ensuring that AI is developed in a responsible and safe way, allowing UK citizens to reap the benefits of this new technology, but be protected from its harms. In March last year, we published the AI regulation White Paper, which sets out principles for the responsible development of AI in the UK. These principles, such as safety and accountability, are at the heart of our approach to ensure the responsible development and use of AI.
The UK Government showed international leadership in this space when we hosted the world’s first major AI safety summit last year at Bletchley Park. This was a landmark event where we brought together a globally representative group of world leaders, businesses, academia and civil society to unite for crucial talks to explore and build consensus on collective international action, which would promote safety at the frontier of AI.
We recognise the concerns around AI models generating large volumes of content that is indistinguishable from human-generated pictures, voice recordings or videos. Enabling users and institutions to determine what media is real is a key part of tackling a wide range of AI risks, including fraud. My hon. Friend has brought forward the idea of labelling to make it clear when AI is used. The Government have a strong track record of supporting private sector innovation, including in this field. Innovations from the safety tech sector will play a central role in providing the technologies that online companies need to protect their users from harm and to shape a safer internet.
Beyond that, Government support measures provide a blueprint for supporting other solutions to keep users safe, such as championing research into the art of the possible, including via the annual UK Safety Tech sectoral analysis report, and driving innovative solutions via challenge funds in partnership with GCHQ and the Home Office.
DSIT has already published best practices relating to AI identifiers, which can aid the identification of AI-generated content, in the “Emerging processes for frontier AI safety” document, which was published ahead of the AI safety summit. In the light of that, DSIT continues to investigate the potential for detecting and labelling AI-generated content. That includes both assessing technical evidence on the feasibility of such detection and the levers that we have as policymakers to ensure that it is deployed in a beneficial way. More broadly, last year the Government announced £100 million to set up an expert taskforce to help the UK to adopt the next generation of safe AI—the very first of its kind. The taskforce has now become the AI Safety Institute, which is convening a new global network and facilitating collaboration across international partners, industry and civil society. The AI Safety Institute is engaging with leading AI companies that are collaboratively sharing access to their AI models for vital safety research.
We are making the UK the global centre of AI safety—a place where companies at the frontier know that the guardrails are in place for them to seize all the benefits of AI while mitigating the risks. As a result, the UK remains at the forefront of developing cutting-edge technologies to detect and mitigate online harms. UK firms already have a 25% market share in global safety tech sectors. AI creates new risks, but as I have set out it also has the potential to super-charge our response to tackling fraud and to make our everyday lives better. The Government are taking action across a range of areas to ensure that we manage the risks and capitalise on the benefits of these new technologies. I thank all Members who have spoken in the debate, and I again thank my hon. Friend the Member for Watford for introducing this debate on AI scams, which I assure him, and the House, are a Government priority.
Question put and agreed to.
Westminster Hall
I am conscious of time and of the broad range of this debate, but I will try to address as many issues as possible. I commend my hon. Friend the Member for Weston-super-Mare (John Penrose) for securing this important debate on preventing misinformation and disinformation in online filter bubbles, and for all his campaigning on the subject throughout the passage of the Online Safety Act. He has engaged with me particularly in the run-up to today’s debate, for which I thank him and hon. Members across the Chamber.
May I echo the sentiments expressed towards my hon. Friend the Member for Brigg and Goole (Andrew Percy)? I thank him for sharing his reflections. I was not going to say this today, but after the ceasefire vote I myself have faced a number of threats and a lot of abuse, so I have some personal reflections on the issue as well. I put on the record my invitation to Members across the House to share their experiences. I certainly will not hesitate to deal with social media companies where I see that they must do more. I know anecdotally, from speaking to colleagues, that it is so much worse for female Members. Across the House, we will not be intimidated in how we vote and how we behave, but clearly we are ever vigilant of the risk.
Since the crisis began, the Technology Secretary and I have already met with the large social media platforms X, TikTok, Meta, Snap and YouTube. My predecessor—my hon. Friend the Member for Sutton and Cheam (Paul Scully)—and the Technology Secretary also held a roundtable with groups from the Jewish community such as the Antisemitism Policy Trust. They also met Tell MAMA to discuss Muslim hate, which has been on the rise. I will not hesitate to reconvene those groups; I want to put that clearly on the record.
It is evident that more and more people are getting their news through social media platforms, which use algorithms. Through that technology, platform services can automatically select and promote content for many millions of users, tailored to them individually following automated analysis of their viewing habits. Many contributors to the debate have argued that the practice creates filter bubbles, where social media users’ initial biases are constantly reaffirmed with no counterbalance.
The practice can drive people to adopt extreme and divisive political viewpoints. This is a hugely complex area, not least because the creation of nudge factors in these echo chambers raises not so much a question of truth as a question of how we can protect the free exchange of ideas and the democratisation of speech, of which the internet and social media have often been great drivers. There is obviously a balance to be achieved.
I did not know that you are a Man City fan, Sir Mark. I am a Manchester United fan. My hon. Friend the Member for Weston-super-Mare talked about fish tackle videos; as a tortured Manchester United fan, I get lots of videos from when times were good. I certainly hope that they return.
The Government are committed to preserving freedom of expression, both online and offline. It is vital that users are able to choose what content they want to view or engage with. At the same time, we agree that online platforms must take responsibility for the harmful effects of the design of their services and business models. Platforms need to prioritise user safety when designing their services to ensure that they are not being used for illegal activity and ensure that children are protected. That is the approach that drove our groundbreaking Online Safety Act.
I will move on to radicalisation, a subject that has come up quite a bit today. I commend my hon. Friend the Member for Folkestone and Hythe (Damian Collins) for his eloquent speech and his description of the journey of the Online Safety Act. Open engagement-driven algorithms have been designed by tech companies to maximise revenue by serving content that will best elicit user engagement. There is increasing evidence that the recommender algorithms amplify extreme material to increase user engagement and de-amplify more moderate speech.
Algorithmic promotion, another piece of online architecture, automatically nudges the user towards certain online choices. Many popular social media platforms use recommender algorithms, such as YouTube’s filter bubble. Critics argue that they present the user with overly homogeneous content based on interests, ideas and beliefs, creating extremist and terrorist echo chambers or rabbit holes. There are a multitude of features online that intensify and support the creation of those echo chambers, from closed or selective chat groups to unmoderated forums.
Research shows that individuals convicted of terrorist attacks rarely seek opposing information that challenges their beliefs. Without diverse views, online discussion groups grow increasingly partisan, personalised and compartmentalised. The polarisation of online debates can lead to an environment that is much more permissive of extremist views. That is why the Online Safety Act, which received Royal Assent at the end of October, focuses on safety by design. We are in the implementation phase, which comes under my remit; we await further evidence from the data that implementation will produce.
Under the new regulation, social media platforms will need to assess the risk of their services facilitating illegal content and activity such as illegal abuse, harassment or stirring up hatred. They will also need to assess the risk of children being harmed on their services by content that does not cross the threshold of illegality but is harmful to them, such as content that promotes suicide, self-harm or eating disorders.
Platforms will then need to take steps to mitigate the identified risks. Ofcom, the new online safety regulator, will set out in codes of practice the steps that providers can take to mitigate particular risks. The new safety duties apply across all areas of a service, including the way in which it is designed, used and operated. If aspects of a service’s design, such as the use of algorithms, exacerbate the risk that users will carry out illegal activity such as illegal abuse or harassment, the new duties could apply. Ofcom will set out the steps that providers can take to make their algorithms safer.
I am conscious of time, so I will move on to the responsibility around extremism. Beyond the duties to make their services safe by design and reduce risk in that way, the new regulation gives providers duties to implement systems and processes for filtering out and moderating content that could drive extremism. For example, under their illegal content duty, social media providers will need to put systems in place to seek out and remove content that encourages terrorism. They will need to do the same for abusive content that could incite hatred on the basis of characteristics such as race, religion or sexual orientation. They will also need to remove content in the form of state-sponsored or state-linked disinformation aimed at interfering with matters such as UK elections and political decision making, or other false information that is intended to cause harm.
Elections have come up quite a bit in this debate. The defending democracy taskforce, which has been instituted to protect our democracy, is meeting regularly and regular discussions are going on; it is cross-nation and cross-Government, and we certainly hope to share more information in the coming months. We absolutely recognise the responsibilities of Government to deal with the issue and the risks that arise from misinformation around elections. We are not shying away from this; we are leading on it across Government.
The idea put forward by my hon. Friend the Member for Weston-super-Mare has certainly been debated. He has spoken to me about it before, and I welcome the opportunity to have this debate. He was right to say that this is the start of the conversation—I accept that—and right to say that he may not yet have the right answer, but I am certainly open to further discussions with him to see whether there are avenues that we could look at.
I am very confident that the Online Safety Act, through its insistence that social media companies deal with the issue and are held to account against their terms and conditions, will be a vital factor. My focus will absolutely be on the implementation of the Act, because we know that that will go quite a long way.
We have given Ofcom, the new independent regulator, the power to require providers to change their algorithms and their service design where necessary to reduce the risk of users carrying out illegal activity or the risk of children being harmed. In overseeing the new framework, Ofcom will need to carry out its duties in a way that protects freedom of expression. We have also created a range of new transparency and freedom-of-expression duties for the major social media platforms; these will safeguard pluralism in public debate and give users more certainty about what they can expect online. As I have said, the Government take the issue incredibly seriously and will not hesitate to hold social media companies to account.
I beg to move,
That the Committee has considered the draft Online Safety (List of Overseas Regulators) Regulations 2024.
It is a pleasure to serve under your chairmanship, Mr Betts. I put on the record my gratitude to hon. Members for their campaigning and collaboration throughout the passage of the Online Safety Act 2023 and their contribution to making the UK the safest place in the world to be online. The Government are working at pace to ensure that the Act is fully operational as quickly as possible. I am therefore pleased to debate this statutory instrument, which was laid before the House in draft on 28 November last year.
The draft instrument is one of several that will enable Ofcom’s implementation of the Act. It concerns Ofcom’s co-operation with and disclosure of information to overseas online safety regulators under section 114 of the Act. Given the global nature of the regulated service providers, it is vital that Ofcom can co-operate and share information with its regulatory counterparts in other jurisdictions to support co-ordinated international online safety regulation.
In certain circumstances, it may be appropriate for Ofcom to support overseas regulators in carrying out their regulatory functions. For example, it may be beneficial for Ofcom to share information that it holds to inform supervisory activity or an investigation being carried out by an overseas regulator. That could support successful enforcement action, which in turn could have direct or indirect benefits for UK users such as preventing malign actors from disseminating illegal content on regulated services.
International collaboration will also make online safety regulation more efficient. In carrying out regulatory oversight activity, Ofcom and its international counterparts will be able to gather extensive information about regulated service providers. In some instances, it may be more efficient for regulators to share information directly, where that information has already been collected by a counterpart regulator. International regulatory co-operation and co-ordination are likely to reduce the regulatory burden on both international regulators and regulated service providers.
Section 114 of the Act builds on the existing information gateways available to Ofcom under the Communications Act 2003 by permitting Ofcom to co-operate with an overseas regulator for specified purposes. It includes powers to disclose online safety information to a regulator
“for the purposes of…facilitating the exercise by the overseas regulator of any of that regulator’s online regulatory functions, or…criminal investigations or proceedings relating to a matter to which the overseas regulator’s online regulatory functions relate.”
The information gateway addresses a small legislative gap, because in the absence of section 114, Ofcom could not share information for those specified purposes. Under section 1(3) of the Communications Act, Ofcom can share information only where it is
“incidental or conducive to the carrying out”
of its functions, subject to the general restrictions on the disclosure of information under section 393 of that Act.
The draft regulations designate the overseas regulators with which Ofcom can co-operate and share information under section 114 of the Online Safety Act. It is important to note that Ofcom will retain discretion over whether to co-operate and share information with the overseas regulators specified. The regulations designate the following overseas regulators: Arcom in France, the Netherlands Authority for Consumers and Markets, the Federal Network Agency in Germany, the Media Commission in Ireland, the eSafety Commissioner in Australia, and the European Commission.
In compiling the list of specified overseas regulators, the Department has consulted Ofcom and carefully considered its operational needs and existing relationships with overseas regulators. That will mean that the designated regulators are those with which Ofcom will be able to share information in an efficient and mutually beneficial manner. We have also considered whether the overseas regulator is a designated regulator of a bespoke online safety regulatory framework, ensuring that any information sharing is proportionate.
Another important consideration is the protection of fundamental freedoms online. For that reason, we have considered whether the autonomy of the regulator is protected in law and whether the overseas regulator and the jurisdiction that empowers it uphold international human rights.
Ofcom is an organisation experienced in handling confidential and sensitive information obtained from the services that it regulates, and there are strong legislative safeguards and limitations on the disclosure of such material. Overseas regulators that receive any information from Ofcom may use it only for the purpose for which it is disclosed. They may not use it for another purpose, or further disclose it, without express permission from Ofcom, unless ordered by a court or tribunal. Ofcom must also comply with UK data protection law, and would need to show that the processing of any personal data was necessary for a lawful purpose.
There are six bodies on the list. Is it likely that the bodies listed will change, given that the world is rather a dynamic place? It seems quite a short list at the moment.
I can confirm that we will continually review the list and update it as appropriate, in consultation with Ofcom.
As a public body, Ofcom is required to act compatibly with the right to privacy under article 8 of the European convention on human rights. As I said to my hon. Friend, we will continue to review the list of designated regulators, particularly as new online safety regimes are developed and operationalised around the world. I commend the draft regulations to the Committee and open the matter for debate.
I thank hon. Members across the Committee for their contributions. I am grateful for this opportunity to debate the list of overseas regulators under the Online Safety Act. It is vital that we recognise the global nature of regulated services and regulated service providers under the Act. The draft regulations will ensure that Ofcom can co-operate and share online safety information with specified overseas regulators where it is appropriate to do so. As I have set out, we will keep under review whether it is desirable and appropriate to add further overseas regulators to the list, particularly as new online safety regimes are developed and operationalised around the world.
May I put on the record a special thank you to the hon. Member for Rotherham for her contribution? I have followed her work since I have been in Parliament, and I know she is a champion in protecting children, especially in the online sphere. I would welcome the opportunity to work with her, and she raised a very interesting point. As I say, we will continue to review the list of regulators. I am certainly happy to have that conversation.
I also give special thanks to the hon. Member for Leeds East for sharing his constituent's story. The intention has always been for this legislation to make the online world, and especially the UK, as safe a place as possible, and international collaboration is key to that. My door remains open if there is anything further that he would like to discuss. Once again, I commend the draft regulations to the Committee.
Question put and agreed to.