Written Statements
I am pleased to inform the House that the US Secretary of Commerce, Gina Raimondo, and I signed a memorandum of understanding on behalf of the US and the UK on 1 April.
The MOU will enable our AI Safety Institutes to work together to follow through on commitments made at the AI Safety Summit held last November at Bletchley Park. It will allow us to develop an interoperable programme of work and approach to AI safety research. This will help us achieve our shared objectives of ensuring the safe development and use of advanced AI.
Specifically, through this MOU, we intend to engage in the following joint activities:
Develop a shared approach to model evaluations, including the underpinning methodologies, infrastructures and processes.
Perform at least one joint testing exercise on a publicly accessible model.
Collaborate on AI safety technical research, to advance international scientific knowledge of frontier AI models and to facilitate sociotechnical policy alignment on AI safety and security.
Explore personnel exchanges between the UK and US AI Safety Institutes.
Share information with one another across the breadth of the institutes’ activities, in accordance with national laws and regulations, and contracts.
The institutes are already working together to align their scientific approaches, and to accelerate and rapidly iterate robust suites of evaluations for AI models, systems and agents. This will put us in a good position to evaluate the next generation of advanced AI models.
I launched the AI Safety Institute at the AI Safety Summit at Bletchley Park in November last year, making it the first state-backed organisation focused on advanced AI for the public interest. At that time, we set out our ambition for the UK AI Safety Institute to advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing new types of AI. We have delivered on this intention by developing the capabilities and capacity of our institute into a world-leading organisation. The AI Safety Institute is conducting the world’s first Government evaluations of advanced AI systems. We aim to push the frontier by developing state-of-the-art evaluations for safety-relevant capabilities and conducting fundamental AI safety research. The institute will share its progress with the world to facilitate an effective global response to the opportunities and risks of advanced AI.
This formal partnership provides the basis for further international AI safety co-operation. The UK and US AI Safety Institutes will work with other countries to promote AI safety, manage frontier AI risks, and develop linkages between countries on AI safety research. To achieve this, the institutes will work together to develop international standards for AI safety testing and other standards applicable to the development, deployment and use of frontier AI models. We will progress this international collaboration bilaterally and through existing multilateral fora, including the upcoming AI Seoul Summit, to be co-hosted by the UK and the Republic of Korea next month.
In closing, I reaffirm this Government’s commitment to tackling the challenges posed by AI head-on. Through collaboration, innovation, and shared determination, we will continue to lead the way in ensuring a safer and more responsible AI landscape for generations to come.
[HCWS409]
Written Statements
Today, the Government are publishing our response to the consultation on the Artificial Intelligence (AI) Regulation White Paper: “A pro-innovation approach to AI regulation”.
The world is on the cusp of an extraordinary new era driven by advances in AI, which presents a once-in-a-generation opportunity for the British people to revolutionise our economy and transform public services for the better and to deliver real, tangible, long-term results for our country. The UK AI market is predicted to grow to over $1 trillion (USD) by 2035—unlocking everything from new skills and jobs to once unimaginable lifesaving treatments for cruel diseases like cancer and dementia. That is why I have made it my ambition for the UK to become the international standard bearer for the safe development and deployment of AI.
We have been working hard to make that ambition a reality, and our plan is working. Last year, we hosted the world’s first AI safety summit, bringing industry, academia and civil society together with 28 leading AI nations and the EU to agree the Bletchley declaration, thereby establishing a shared understanding of the opportunities and risks posed by frontier AI.
We were also the first Government in the world to formally publish our assessment of the capabilities and risks presented by advanced AI; and to bring together a powerful consortium of experts into our AI Safety Institute, committed to advancing AI safety in the public interest.
With the publication of our AI Regulation White Paper in March, we set out our initial steps to develop a pro-innovation AI regulatory framework. Instead of designing a complex new regulatory system from scratch, the White Paper proposed five key principles for existing UK regulators to follow and a central function to ensure the regime is coherent and streamlined and to identify regulatory gaps or confusing overlaps. Our approach must be agile so it can respond to the unprecedented speed of development, while also remaining robust enough in each sector to address the key concerns around potential societal harms, misuse risks and autonomy risk.
This common sense, pragmatic approach has been welcomed and endorsed both by the companies at the frontier of AI development and leading AI safety experts. Google DeepMind, Microsoft, OpenAI and Anthropic all supported the UK’s approach, as did Britain’s budding AI start-up scene, and many leading voices in academia and civil society such as the Centre for Long-Term Resilience and the Centre for the Governance of AI.
Next steps on establishing the rules for governing AI
Since we published the White Paper, we have moved quickly to implement the regulatory framework. We are pleased that a number of regulators have already taken steps in line with our framework such as the Information Commissioner’s Office, the Office for Nuclear Regulation and the Competition and Markets Authority.
We have taken steps to establish the central function to drive coherence in our regulatory approach across Government, starting by recruiting a new multidisciplinary team to conduct cross-sector assessment and monitoring to guard against existing and emerging risks in AI.
Further to this, we are strengthening the team working on AI within the Department for Science, Innovation and Technology across the newly established AI policy directorate and the AI Safety Institute. In recognition of the fact that AI has become central to the wider work of DSIT and Government, we will no longer maintain the branding of a separate “Office for AI”. Similarly, the Centre for Data Ethics and Innovation (CDEI) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its mission. The name highlights the directorate’s role in developing tools and techniques that enable responsible adoption of AI in the private and public sectors, in support of DSIT’s central mission.
In September we also announced the AI and digital hub—a pilot scheme for a brand-new advisory service run by expert regulators in the Digital Regulation Co-operation Forum. It will be laser-focused on helping companies get to grips with AI regulations so they can spend less time form-filling and more time getting their cutting-edge products from the lab on to the market and into British people’s lives.
Building on the feedback from the consultation, we are now focused on ensuring that regulators are prepared to face the new challenges and opportunities that AI can bring to their domains. This consultation response presents a plan to do just that. It sets out how we are building the right institutions and expertise to ensure that our regulation of AI keeps pace with the most pressing risks and can unlock the transformative benefits these technologies can offer.
To drive forward our plans to make Britain the safest and most innovative place to develop and deploy AI in the world, the consultation response announces over £100 million to support AI innovation and regulation. This includes a £10 million package to boost regulators’ AI capabilities, helping them develop practical tools to build the foundations of their AI expertise and ability to address risks in their domain.
We are also announcing a new commitment by UK Research and Innovation that future investments in AI research will be leveraged to support regulator skills and expertise. Further to this, we are announcing a nearly £90 million boost for AI research, including £80 million through the launch of nine new research hubs across the UK and a £9 million partnership with the US on responsible AI as part of our international science partnership fund. These hubs are based in locations across the country and will enable AI to evolve and tackle complex problems across applications, from healthcare treatments to power-efficient electronics.
In addition, we are announcing £2 million of Arts and Humanities Research Council (AHRC) funding to support research that will help to define responsible AI across sectors such as education, policing and creative industries.
In the coming months, we will formalise our regulator co-ordination activities by establishing a steering committee with Government representatives and key regulators. We will also be conducting targeted consultations on our cross-sectoral risk register and monitoring and evaluation framework from spring to make sure our approach is evidence-based and effective.
We are also taking steps to improve the transparency of this work, which is key to building public trust. To this end, we are also calling on regulators to publicly set out their approaches to AI in their domains by April 2024 to increase industry confidence and ensure the UK public can see how we are addressing the potential risks and benefits of AI across the economy.
Adapting to the challenges posed by highly capable general-purpose AI systems
The challenges posed by AI technologies will ultimately require legislative action across jurisdictions, once understanding of risk has matured. However, legislating too soon could stifle innovation, place undue burdens on businesses, and shackle us from being able to fully realise the enormous benefits AI technologies can bring. Furthermore, our principles-based approach has the benefit of being agile and adaptable, allowing us to keep pace with this fast-moving technology.
That is why we established the AI Safety Institute (AISI) to conduct safety evaluations on advanced AI systems, drive foundational safety research, and lead a global coalition of AI safety initiatives. These insights will ensure the UK responds effectively and proportionately to potential frontier risks.
Beyond this, the AISI has built a partnership network of over 20 leading organisations, allowing AISI to act as a hub, galvanising safety work in companies and academia; Professor Yoshua Bengio, as chair, is leading the UK’s international scientific report on advanced AI safety, which brings together 30 countries, along with the EU and the UN; and the AISI is continuing its regular engagement with leading AI companies that signed up to the Bletchley declaration.
In the consultation response, we build on our pro-innovation framework and pro-safety actions by setting out our early thinking on future targeted, binding requirements on the developers of highly capable general-purpose AI systems. The consultation response also sets out the key questions and considerations we will be exploring with experts and international partners as we continue to develop our approach to the regulation of the most advanced AI systems.
Driving the global conversation on AI governance
Building on the historic agreements reached at the AI safety summit, today we also set out our broader plans regarding how the UK will continue to drive the global debate on the governance of AI.
Beyond our work through the AI Safety Institute, this includes taking a leading role in multilateral AI initiatives such as the G7, OECD and the UN, and deepening bilateral relationships building on the success of agreements with the US, Japan, Republic of Korea and Singapore.
This response paper is another step forward for the UK’s ambitions to lead in the safe development and deployment of AI. The full text of the White Paper consultation response can be found on gov.uk.
[HCWS247]
Written Statements
On Monday 4 December the UK and EU will sign our bespoke new agreement finalising the UK’s association to the Horizon Europe and Copernicus programmes. This deal is set to create and support thousands of new jobs as part of the next generation of research talent. It will help deliver the Prime Minister’s ambition to grow the economy and cement the UK as a science and technology superpower by 2030.
As part of the new deal negotiated over the last six months, the Prime Minister secured improved financial terms of association to Horizon Europe that are right for the UK—increasing the benefits to UK scientists and the value for money for the UK taxpayer. The deal ensures that:
UK taxpayers will not pay for the period during which UK researchers have been excluded since 2021, with costs starting from January 2024.
The UK will have a new automatic clawback that protects the UK as participation recovers from the effects of the last two and a half years. It means the UK will be compensated should UK scientists receive significantly less money than the UK puts into the programme. This was not the case under the original terms of association.
Later today we expect UK and EU representatives to meet in the format of the Specialised Committee on Participation in Union Programmes, where they are due to sign a decision to adopt Protocols I and II and amend Annex 47 of the Trade and Co-operation Agreement, thereby formalising the UK’s association to Horizon Europe and Copernicus.
I will meet in Brussels with EU Research and Innovation Commissioner Iliana Ivanova and members of the UK and EU R&D sectors to discuss and promote efforts to boost UK participation in Horizon Europe and Copernicus.
My visit to Brussels marks the start of joint UK-EU work to ensure that UK businesses and researchers and their international counterparts come together and seize the opportunity that UK association to the programmes brings.
Researchers, academics, and businesses of all sizes can confidently bid for a share of the more than £80 billion available through the two programmes, with calls for the 2024 work programme already open. It builds on the Government’s record-breaking backing for R&D, with a commitment to invest £20 billion in UK R&D by 2024-25, borne out in recent announcements like the £500 million boost to the AI research resource and £50 million for battery manufacturing R&D, announced in the autumn statement.
DSIT will shortly launch a communications campaign to maximise participation in Horizon Europe and Copernicus from researchers, academics and businesses of all sizes in the UK. Encouraging smaller businesses to pitch for, and win, Horizon and Copernicus funding supports DSIT’s aim to help the UK’s promising science and tech firms scale-up and grow. Officials will work closely with key sector stakeholders to ensure this message reaches businesses of all kinds, who might not have previously considered applying, as well as researchers and academics in every part of the country.
[HCWS88]
Written Statements
The Government are working at pace to ensure that the new online safety regulatory framework set out in the Online Safety Act 2023 is fully operational as quickly as possible. Today we are launching the first consultation related to the Act on the eligible entity criteria and procedure to be used for the super-complaints regime.
Super-complaints will play an essential role within the framework as they will allow for complaints about systemic issues to be assessed by the regulator. They will ensure that Ofcom is made aware of systemic issues users are facing which it may not be aware of otherwise.
To support the implementation of the super-complaints regime, the Secretary of State has the power to make regulations setting out the criteria a body must meet in order to be eligible to submit a super-complaint to Ofcom: the eligible entity criteria. The Secretary of State is further required to make provisions about procedural matters related to super-complaints. This can include requirements such as the form and manner of such a complaint, steps that Ofcom must take in relation to it, and time limits for each step.
In developing these regulations, the Government’s view is that the eligible entity criteria should allow systemic issues to be raised effectively, while ensuring that super-complaints are high quality and evidence-based. This approach will focus Ofcom’s resource on genuine problems. Similarly, we are seeking to create a super-complaints procedure which is clear, fair and effective.
In order to ensure that the regulations are informed by the expertise of civil society organisations and the wider public, a full consultation has been launched today and will run for eight weeks. We welcome responses to any, or all, of the proposed questions.
The consultation will use the Qualtrics survey tool. For those unable to access the tool, a PDF copy of the survey can be found at: Online Safety Act - Super-Complaints Consultation. A copy of the consultation document will be placed in the Libraries of both Houses and published on gov.uk.
[HCWS37]
Commons Chamber
With permission, Madam Deputy Speaker, I shall make a statement about the Government’s artificial intelligence safety summit.
Today I update the House about a turning point in our history. With 1% of the world’s population, we have built the third largest AI sector. In less than a decade, the number of AI companies basing themselves here has rocketed by 688%, and UK AI scale-ups are raising almost double the amount raised by France, Germany and the rest of Europe combined. But the sudden and unprecedented growth in the speed and power of artificial intelligence presents unlimited opportunities along with the potential of grave risks, which we cannot ignore.
I truly believe that we stand at a crossroads in human history. To turn the wrong way would be a monumental missed opportunity for mankind, which is why last week presented such a watershed moment. We convened leaders, Ministers, developers, scientists and academics from across the globe to discuss for the first time the risks and opportunities of frontier AI. Although the collection of countries and organisations that came to Bletchley Park was unprecedented, our goal from the start was to leave with tangible outcomes. Let me briefly outline a handful of actions that have resulted from the summit.
First, 28 countries and the European Union, representing the majority of the world’s population, signed up to an unprecedented agreement known as the Bletchley declaration. Despite some claiming that such a declaration would be rejected by many countries in attendance, we agreed that, for the good of all, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. We agreed on the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection.
We also agreed to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge—in particular to prevent misuse and issues of control, and the amplification of other risks—and that Turing Award winner Yoshua Bengio, credited as being one of the godfathers of AI, would lead on a “state of the science” report to ensure that, collectively, we stay on top of the risks of frontier AI.
Countries with differing world views and interests, including China, signed the same agreement. Some had said that China would not come to the summit, but it did. They said that, if it did attend, China would never sign an agreement, but it did. Then they said that if China did sign the agreement, it would not agree to continue collaborating in the long term—but it did that as well. That alone would have made the summit a watershed moment in the history of AI safety, but we went further.
We surpassed all expectations by securing an agreement on Government-led testing pre-deployment of the models. This is truly a game changer to help ensure that we can safely harness the benefits of frontier AI while mitigating the risks. To facilitate it, the UK announced that the world-leading frontier taskforce will morph into the world’s first permanent AI safety institute, which will bring together the very best AI minds in the world to research future risks and conduct third-party testing of models.
This is just the start of the journey on AI safety, which is why we have also confirmed funding for the institute for the rest of the decade and secured future AI safety summits to be held in the Republic of Korea in six months’ time and in France in one year’s time, ensuring that the extraordinary pace of international action set by the summit last week is maintained into the future.
None the less, the summit is just one piece in the UK’s overall approach to AI safety. Our White Paper published earlier this year was praised for ensuring that the UK can be agile and responsive as risks emerge. I am sure that Opposition Members will call for a one-size-fits-all “snapshot in time” piece of legislation, but we must ensure that we deepen our understanding of the problem before we rush to produce inadequate legislation.
We also need to ensure that we are quick enough to act, which is why we have taken the steps to ensure that we can keep pace with the development of the technology, with the next set of models being released within six months. AI is the fastest emerging technology that we have ever seen, and we need a system that can identify, evaluate and understand AI to then allow us to mitigate the risks with the right guardrails. That is why it is such an achievement to agree the pre-deployment testing of models; we should not underestimate that achievement.
Companies need to do more too, which is why before the summit we managed to go further than any country ever has. We secured the publication of the main AI companies’ safety policies, along with a catalogue of the possible policies, ensuring transparency and a race to the top, complemented by the recent US executive order. It is also why I have been advocating for responsible capability scaling, which I often refer to as a kind of smoke alarm for AI developers.
The release of ChatGPT not even a year ago was a breakthrough moment for humanity. We were all surprised by the progress. We saw the acceleration of investment into, and adoption of, AI systems at the frontier, making them increasingly powerful and consequential to our lives. These systems could turbocharge our public services, saving lives in the NHS and tailoring education to every child’s needs. They could free people everywhere from tedious work and amplify our creative abilities. They could help our scientists to unlock bold new discoveries, opening the door to a world where one day diseases such as cancer will no longer exist and there will be access to near-limitless clean energy.
But these systems could also further concentrate unaccountable power in the hands of a few, or be maliciously used to undermine societal trust, erode public safety or threaten international security. The British people deserve to know that those who represent them in this place are driving forward the right guardrails and governance for the safe development and deployment of frontier AI systems. I firmly believe that it cannot be left to chance or private actors alone, nor is it an issue for party political squabbling or point scoring.
As we stand here today, what was once considered science fiction is quickly becoming science fact. Just a few years ago, the most advanced AI systems could barely write coherent sentences. Now they can write poetry, help doctors to detect cancer, and generate photo-realistic images in a split second, but with those incredible advances come potentially grave risks, and we refuse to bury our head in the sand. We cannot ignore or dismiss the countless experts who tell us plain and simple that there are risks of humans losing control, that some model outputs could become completely unpredictable, and that the societal impacts of AI advances could seriously disrupt safety and security here at home.
Countries entered the summit with diverse and conflicting views of the world. Some speculated that a deal between the countries invited would be impossible, but what we achieved in just two days at Bletchley Park will be remembered as the moment that the world came together to begin solving an unprecedented global challenge. An international approach is not just preferable, but absolutely essential. Some Members understandably questioned the decision to invite China to the summit, and I do not dismiss the very real concerns and grievances that many on both sides of the House might have had, but a Government who represent the British people must ultimately do what is right for the British people, especially when it comes to keeping them safe. I am firm that it was the right decision for the country in the long term. There simply cannot be a substantive conversation about AI without involving the world’s leading AI nations, and China is currently second in the world in AI.
AI is not some phenomenon that is happening to us; it is a force that we have the power to shape and direct. I believe that we have a responsibility—and, in fact, a duty—to act and to act now. I conclude by taking us back to the beginning: 73 years ago, Alan Turing dared to ask whether computers would one day think. From his vantage point at the dawn of the field, he observed that
“we can only see a short distance ahead, but we can see plenty there that needs to be done.”
For us in this place, there is indeed plenty that needs to be done, but we cannot do it in isolation, so I urge Members across the House to adopt the collaborative, constructive approach that the international community displayed at Bletchley last week. If we in this place put our differences aside on this issue and work pragmatically on behalf of the British people, this new era of artificial intelligence can truly benefit every person and community across the country and beyond. Our summit was a successful step forward, but we are only just getting started. I commend this statement to the House.
I thank the Secretary of State for advance sight of her statement. As we have heard, the opportunities of AI are almost endless. It has the potential to transform the world and deliver life-changing benefits for working people. From delivering earlier cancer diagnoses to relieving traffic congestion or providing personalised tuition to children, AI can be a force for good. It is already having a positive impact in the present: in NHS hospitals such as the Huddersfield Royal Infirmary, AI is being used to help patients, cut waiting lists and save lives. The Labour party wants that technology to be available in every hospital with our fit for the future fund.
However, to secure those benefits we must get on top of the risks and we must build public trust. We welcome the announcements made last week at Bletchley Park. The future summits in South Korea and France will hopefully lead to more agreement between nations about how we make this new technology work for everyone. The AI safety institute will play an important role in making this new technology safe. Labour supports its creation, but we do have some questions. It would be good to hear the Secretary of State explain why the new institute is not keeping the function of identifying new uses for AI in the public sector. As the institute is taking all the AI expertise from the taskforce, it is also unclear who in her Department will carry out the crucial role of identifying how the public sector can benefit from cutting-edge technology.
There are also questions about UK computer capability. The AI safety institute policy paper states:
“Running evaluations and advancing safety research will also depend on access to compute.”
Yet earlier this year, the Government had less computing power than Finland and Italy. Can the Secretary of State update the House on how much of the AI research resource to which the institute will get priority access is available and operational?
Of course, the main task of the institute is to understand the risks of the most advanced current AI capabilities and any future developments. The Prime Minister told the public two weeks ago that,
“AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale. Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.”
Those are stark warnings and demand urgent action from any Government. Keeping the public safe is the first duty of Government. Yet Ministers have chosen not to bring forward any legislation on the most advanced AI. All the commitments that have been made are voluntary, and that creates problems.
For example, if a new company is established with advanced capabilities, how will it be compelled to join the voluntary scheme? What if a company decides it does not want to co-operate any more? Is there a mechanism to stop that happening? The stakes are too high for those questions to remain open, so I look forward to the Secretary of State’s being able to offer us more detail.
There was a space for a Bill on pedicabs in London in the King’s Speech this year, but not for one on frontier AI. Other countries, such as the US, have moved ahead with mandatory regulation for safety and security. It is confusing for the public to hear a Prime Minister on the one hand tell the country that there are dangers to our way of life from AI, but on the other hand say that his Government are in no rush to regulate.
Labour has called for the introduction of binding regulation on those companies developing the most powerful frontier AI because, for us, the security of the British people will always come first. I hope that the Government will now consider taking action and I look forward to the Secretary of State’s response to these points.
I agree with the hon. Gentleman on the importance of building trust among the public, which will also ensure the adoption of AI. In relation to ensuring that we deploy AI throughout our public services, it was this Government who just the other week announced £100 million to accelerate AI in our health missions, and more than £2 million to assist our teachers to spend less time with paperwork and administration and more time in the classroom. We will continue to work hand in hand with the Cabinet Office to ensure that we utilise AI in our public services, but to be able to do that, we must of course grip the risk, which is exactly why we called the summit.
On computing, the hon. Member will be only too aware that the Chancellor of the Exchequer announced earlier this year £900 million for an exascale programme, which we have allocated to Edinburgh. We have also dedicated £300 million—triple the original amount announced—to AI research resource facilities in Cambridge and Bristol, the first of which will come on stream this year.
The hon. Member also referenced the risk document that we published. We were the first Government in the world to be fully transparent with the British public, showcasing the risks that AI could present. That document was produced by scientists and our national security teams.
The hon. Member referenced legislation and regulation. It is not true that we have no regulation; in fact, we have multiple regulators. In the White Paper that we published earlier this year, we set out the principles that they need to work to. We should not minimise what we achieved just last week: that agreement to do testing pre-deployment is monumental. It is—absolutely—the start of a process, not the end. We could have waited and said, “Let’s just do our own piece of legislation,” which would have taken about a year, as he knows, but we do not have a year to wait, because the next set of models will come out within six months. We also need to deepen our understanding of the risks before we rush to legislate, because we believe that we need to better understand the problems before we insert long-term fixed solutions.
We need to concentrate on putting the safety of the British public first, which is what we have done, so that we can seize the limitless opportunities of AI. I hope that the hon. Member will see the foresight that this Government have had in putting that not just on the British agenda but on the agenda of the world.
I call the Chair of the Science and Technology Committee.
May I congratulate the Government on convening the summit and on its success? It is, as the Secretary of State said, a considerable achievement to get the US, the EU and China to agree a communiqué. It was good that the summit agreed access to the frontier models. Having future summits, in six months’ time, is also an important step forward.
As the Secretary of State said, the summit focused principally on frontier AI, but it is vital that we can deal with the here-and-now risks of the AI being deployed already. In the White Paper that they published in March, the Government said that they expected to legislate to have regulators pay
“due regard to the principles”
of that White Paper, but such a Bill was missing from the King’s Speech. Meanwhile, in the US, a very extensive executive order has been issued, and the EU is finalising its Artificial Intelligence Act.
Will the Secretary of State think again, in publishing the response to the White Paper, about taking this final opportunity before a general election to ensure that the good intentions and practice of the Government are not inadvertently left behind, with other jurisdictions’ legislation preceding our own and other people setting the rules rather than the United Kingdom setting a framework for the world?
I thank my right hon. Friend for his important question. I think it is right that we do not rush to legislate, because we need to understand properly the risks that we are facing. That is why we have been investing in bringing on board the correct experts, both into Government and into the taskforce that will now morph into the institute. It is why we have also committed not just ourselves but our international partners to producing the “state of the science” reports, so that we can stay up to date with those risks.
Absolutely, we will eventually have to legislate, but as we said in the White Paper that we published earlier this year, we do not need to rush to do that; we need to get the timing right to ensure that we have the right solutions to match those problems. There is a lot that we can do without legislation. We demonstrated that last week by convening the world for collective action to secure pre-deployment testing of models, to ensure that we work together to get a better handle on the risks, and to encourage partners such as America to go further, where we have seen the two countries acting in lockstep.
I thank the Secretary of State for advance sight of her statement. The Bletchley declaration provides a baseline and is useful as a starting point, but it will be ongoing engagement that counts as we develop our understanding of the opportunities and threats that AI presents.
I was very taken by the Secretary of State saying that this was not an opportunity for party political point scoring. In that vein, on reflection, does she share my disappointment that the UK Government seemed to actively take steps to exclude the involvement of the devolved Administrations from around these islands from participation in the summit? Any claim that the UK might have to global leadership in AI rests in large part on the work that goes on in all parts of these islands, particularly from a legal, ethical, regulatory and technological perspective. It would have been very valuable had the other Governments that exist on these islands had the opportunity to fully participate in the summit.
While the declaration is a useful starting point, it is the future work on this that will count, so may I have an assurance from the Secretary of State that the UK Government will not seek to curtail again the involvement of devolved Administrations around these islands in future national and international discussions on these matters?
I met my counterpart—and my counterpart from Wales—just days before the summit, but as the hon. Member will appreciate, AI is not a devolved matter, and the people of Scotland were represented by the UK Government.
I thank the Secretary of State for her statement and the frontier taskforce for all the work it has done to produce an important global moment not dissimilar to the COP process. My question is about the AI safety team. In Lancashire we have the National Cyber Force centre coming to Samlesbury, and there is already a big skills base in the region, with GCHQ in Manchester. Can she update me and my constituents on how AI safety will get fed into our national security and how she will work with the National Cyber Force centre?
I know that my hon. Friend is a passionate advocate of cyber-security, which is one key area that we delved into at the summit. It is incredibly important that we maintain cyber-security throughout not just our Government and public services but our businesses, which is why we have been prioritising the area in the UK. I continue to talk to my hon. Friend and other Members about this work.
I thank the Secretary of State for her statement. On her reference to poetry, may I remind her that AI creates nothing? It generates a facsimile of text, but it does not create poetry. On the 400th anniversary of the First Folio, that can only be done by this quintessence of dust that we are.
On that point, why were the creative industries excluded from the AI summit, when the Secretary of State knows how bitterly disappointed they were not to be included and how profoundly existential this whole issue is for the creative industries—one of the most successful and fastest growing sectors of our economy? Instead, they have been offered the sop of a side roundtable in the future, which the platforms are not even attending. Will the Secretary of State think again about the importance of including our excellent creative industries in every discussion that the Government have about the future of artificial intelligence?
Because the summit was only two days and was focused on a strategic conversation about frontier AI and the risks and opportunities, not everybody could be engaged and attend. We had an extensive programme called the road to the summit, where several roundtables were held with the creative industries, and both the Minister for Data and Digital Infrastructure and I attended. The Secretary of State for Culture, Media and Sport led some roundtables as well. We are currently working on a code of practice, bringing together the creative sector and the AI sector, to identify and come up with some of the solutions in this area.
It was a great pleasure to join the Secretary of State, the Prime Minister, and business and Government leaders from around the world last week at the AI safety summit. Does the Secretary of State agree that Milton Keynes showcased that it is an excellent place not only to hold global events, but also to invest in technologies such as AI and robotics?
I absolutely agree with my hon. Friend. I could not have thought of a better place to host this international summit than Bletchley Park. It is not just me who thinks so: all of our delegates remarked on how important it was to host it at such a historically significant venue, one so close to the vibrant tech capital of Milton Keynes.
The Bletchley Park declaration is indeed to be welcomed. Given the more or less consensual response to the Secretary of State’s statement, it strikes me that taking this issue forward on a cross-party basis is going to be absolutely crucial. There was no mention of legislation in the King’s Speech, and although I partially accept the Secretary of State’s point about the time involved in legislating, Governments of all colours come and go, but this issue transcends those changes. Can we get an undertaking from the Secretary of State that there will be discussions right across the Chamber involving all the parties about where she sees things going and what legislation may have to be looked at in the future, in order to give continuity?
I am more than happy to talk to anybody from around the House, and to convene a meeting with colleagues of all colours to discuss this important area and what the future may hold in terms of responses and action.
It was in 1993 that the world wide web first became accessible to the public, and 30 years on, the world is still grappling with how to regulate and legislate for this industry. I am pleased that we heard from the summit that we are going to have proactive model checking, but I agree with the Chair of the Select Committee, my right hon. Friend the Member for Tunbridge Wells (Greg Clark), that much of this AI technology is already out there—that is the problem. How quickly will the safety institute be set up, and most importantly, how quickly will we see tangible results? Can we learn lessons from the vaccine, and from the Medicines and Healthcare products Regulatory Agency on how legislation and regulation can run alongside innovation in this sector?
In fact, we are learning some of those lessons, because the taskforce itself was modelled on our world-leading vaccine taskforce. As to when the institute will be set up, to all intents and purposes it has already been set up, because it is the next chapter—the evolution—of the existing taskforce. That taskforce has already done research on safety, and has demonstrated to delegates at the summit the full extent of the risks that could emerge. It has already begun testing those models, and I can assure this House that there will be pre-deployment testing of the models that are going to come out within the next six months.
The First Folio has been quoted. I would like to quote a more recent famous science fiction series: one Commander Adama, who said,
“You cannot play God then wash your hands of the things that you’ve created.”
I absolutely agree that there are huge opportunities in AI, but we have already heard about the huge risks. The Secretary of State says that we should not rush to legislation, but the truth is that we have often lagged behind in this area—for example, in regulating social media—and we see others moving ahead, including the United States, as we have heard. The EU is also planning legislation by the end of the year. If we are not having legislation, can the Secretary of State at least assure us that an urgent assessment is being made of how hostile states are already weaponising AI for military and other purposes, including information, cyber and hybrid warfare, but also in the chemical, biological, radiological and nuclear spheres? Some hugely worrying stuff is happening out there. Are we urgently assessing it, and deciding how we will respond and defend this country?
Let me pull up the hon. Member on one comment he made, which was about us lagging behind on legislation for social media. We are in fact leading the world with the world’s most comprehensive Bill—now Act—in that area. On the misuse of AI, this is one of the three pillars of risk that we discussed at the summit. The risk documents that we published just before the summit highlighted the fact that AI can amplify existing risks. There are already risks presented by the internet and other technologies in relation to biochemical warfare—they are present today and we are dealing with them. This could potentially amplify that, and we have certainly both talked about that internationally and are working on it domestically. We will be coming back to our White Paper within the year.
Historically, every technological revolution has led to threats of job losses—people not having opportunities to work, which is dreadful for people’s lives. However, here we are today with almost full employment in the UK, and there are opportunities for AI to increase that, as well as to make people’s lives easier, improve employment prospects and, indeed, conquer diseases. Will my right hon. Friend set out some of the advantages for the average individual of harnessing artificial intelligence for the benefit of all humankind?
The opportunities from AI are limitless, and they can transform our public services. In fact, that is already happening. We see our doctors detecting cancer earlier, and we see us utilising the technology to try to tackle things such as climate change more quickly. In relation to jobs, my hon. Friend is quite right that AI, like any technology, will change the labour market. If we look back to 1940, we see that 60% of jobs we have now did not actually exist back then. AI will create new jobs, and jobs we cannot even think of, but it will also complement our jobs now, allowing us more time to do the bits of our jobs we actually train to do—for example, assisting teachers to have more time in the classroom and doctors to have more time with patients.
During the covid pandemic, one of my greatest concerns was the over-reliance on and the promotion of lateral flow devices as a gold standard, as it were, of testing and surveillance. It was all the more frustrating because, during that time, I was aware of domestic businesses that were developing AI models to surveil not just covid, but other viruses, such as Ebola and dengue fever. I had a recent very constructive meeting with Health, which is now on board with AI and looking at domestic diagnostics. Will the Secretary of State meet me to discuss how these businesses can be brought forward as part of a co-ordinated strategy to develop AI testing and prepare effectively for any future pandemic?
Sorry, but not only do I not buy the Secretary of State’s excuses for not including devolved Governments in the summit when my constituency alone is bursting with leading fintech, cyber-security and creative organisations, but I do not buy her excuses for not introducing regulation more rapidly. She has said herself that the game-changing ChatGPT was introduced a year or so ago, and the EU is rapidly approaching completion of its first AI Act. Why have her Government once again been caught napping on introducing regulation?
To repeat the comments I made earlier, AI is not a devolved matter, and the people of Scotland were represented by the UK Government—by me and also by the Prime Minister of the UK. In relation to her urging us to do a copycat of EU legislation, may I point out that it was our White Paper that was praised for its innovation and its agility? It has allowed us to attract some of the leading AI companies to set up their first international offices here in the UK, creating the jobs not only of today, but of tomorrow.
I thank the Secretary of State for her statement.
Written Statements
I am pleased to provide the House with an update on developments in the UK Government’s artificial intelligence policy in recent months.
AI promises to revolutionise our economy, society and everyday lives, bringing with it enormous opportunities but also significant new risks. Led by the Department for Science, Innovation and Technology, the UK has established itself as a world leader in driving responsible, safe AI innovation and has committed to host the first major international summit of its kind on the safe use of AI, to be held at Bletchley Park on 1 and 2 November 2023.
AI Safety Summit
The AI safety summit will bring together key countries, as well as leading technology organisations, academia and civil society to inform rapid national and international action at the frontier of AI development. The summit will focus on risks created or significantly exacerbated by the most powerful AI systems, such as the proliferation of access to information that could undermine biosecurity. In turn, the summit will also consider how safe frontier AI can be used for public good and to improve people’s lives—from lifesaving medical technology to safer transport. It will build on important initiatives already being taken forward in other international fora, including at the UN, OECD, G7 and G20, by agreeing practical next steps to address risks from frontier AI.
On 4 September, the Government launched the start of formal pre-summit engagement with countries and a number of frontier AI organisations. As part of an iterative and consultative process, the Government published the five objectives that will be progressed. These build upon initial stakeholder consultation and evidence gathering, and will frame the discussion up to and at the summit:
a shared understanding of the risks posed by frontier AI and the need for action;
a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;
appropriate measures that individual organisations should take to increase frontier AI safety;
areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance; and
a showcase of how ensuring the safe development of AI will enable it to be used for good globally.
I look forward to keeping Parliament updated as plans for the summit progress.
Frontier AI Taskforce
Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly. Earlier this year, the Government announced £100 million to set up an expert taskforce to help the UK adopt the next generation of safe AI—the first of its kind.
On 7 September, we renamed the taskforce—formerly the Foundation Model Taskforce—the Frontier AI Taskforce, explicitly acknowledging its role in evaluating risk at the frontier of AI, and systems which could pose significant risks to public safety and global security.
Since the taskforce’s chair, Ian Hogarth, was appointed 12 weeks ago, the taskforce has made rapid progress, recruiting its external advisory board and research teams and developing partnerships with leading frontier AI organisations, to help develop innovative approaches to addressing the risks of AI and harnessing its benefits. I am pleased to be welcoming seven leading advisers to guide and shape the taskforce’s work through its external advisory board. They are: the Turing Award laureate Yoshua Bengio; the GCHQ Director, Anne Keast-Butler; the Deputy National Security Adviser, Matt Collins; the Chief Scientific Adviser for National Security, Alex Van Someren; the former Chair of the Academy of Medical Royal Colleges, Dame Helen Stokes-Lampard; the Alignment Research Centre researcher Paul Christiano; and the Prime Minister’s representative for the AI safety summit, Matt Clifford, who will join as vice-chair to unite the taskforce’s work with preparations for the summit—all of whom will turbocharge the taskforce’s work by offering expert insight.
We are also drawing on experts to build a world-leading research team. Oxford researcher Yarin Gal has been confirmed as the first taskforce research director. Cambridge researcher David Krueger will also be working with the taskforce as it scopes its research programme in the run-up to the summit. The research team will sit alongside a dedicated team of civil servants—overseen by a senior responsible officer in my Department, reporting to the DSIT permanent secretary as accounting officer. Together, these teams will work to develop sophisticated safety research capabilities for the UK, strengthen UK AI capability and deliver public sector use cases in frontier AI models.
Industry collaboration, including internationally, forms the backbone of the UK’s approach to shared AI safety, and the work of the taskforce will be no different. The taskforce is harnessing established industry expertise through partnerships with leading AI companies and non-profits, a number of which were outlined in our recent announcement. These partnerships will unlock advice on the national security implications of frontier AI, as well as broader support in assessing the major societal risks posed by AI systems.
AI Regulation
We are moving quickly to establish the right guardrails for AI to drive responsible, safe innovation. In March, we published the AI regulation White Paper, which set out our first steps towards establishing a regulatory framework for AI. We proposed five principles to govern AI, and committed to establishing mechanisms to monitor AI risk, and co-ordinate, evaluate and adapt the regulatory framework as this technology evolves. We received responses from over 400 individuals and organisations across regulators, industry, academia and civil society. We will be publishing our response to the consultation later this year, to ensure we can take into account the outcomes of the AI safety summit in November.
Since publishing the White Paper, we have taken rapid steps to implement our regulatory approach. I am pleased to confirm that my Department has now established a central AI risk function, which will identify, measure and monitor existing and emerging AI risks using expertise from across Government, industry and academia, including the taskforce. It will allow us to monitor risks holistically as well as to identify any potential gaps in our approach.
We committed to an iterative approach that will evolve as new risks or regulatory gaps emerge. We note the growing concern around the risks to safety posed by our increasing use of AI, particularly the advanced capabilities of frontier AI and foundation models. Our work through the taskforce offers vital insights into the issue and we will be convening nations to examine these particular risks at the international level. We will be providing a wider update on our regulatory approach through our response to the AI regulation White Paper later this year.
Alongside this, we are working closely with regulators. Many have started to proactively and independently take action in line with our proposed AI framework, including the Competition and Markets Authority, which yesterday published a report on its initial review of AI foundation models; the Medicines and Healthcare products Regulatory Agency, which has published a road map for software and AI as a medical device; and the Office for Nuclear Regulation, which is piloting an independent sandbox for the use of AI in the nuclear sector, with support from the regulators’ pioneer fund. This demonstrates how our expert UK regulators are taking innovative, world-leading approaches to ensuring AI safety and effectiveness.
We are also examining ways to improve co-ordination and clarity across the regulatory landscape. This includes our work with the Digital Regulation Cooperation Forum (DRCF) to pilot a multi-regulator advisory service for AI and digital innovators, which will be known as the DRCF AI and digital hub. This will provide tailored support to innovators to navigate the AI and wider digital regulatory landscape and capture important insights to support the design and delivery of our AI regulatory framework.
[HCWS1054]
Commons Chamber
This is a momentous day for British science and technology as we have negotiated a great landmark deal, designed in the UK’s best interest. A hard-fought-for deal that will allow the UK’s world-leading scientists, researchers and businesses to participate with total confidence in both Horizon Europe and Copernicus, it gives the best and brightest of the UK’s scientific community access to the world’s largest research collaboration programme.
It means British scientists and businesses can co-operate with researchers not just in the EU, but in Norway, New Zealand and Israel, expanding the reach and impact of British science and technology to every corner of the globe. With Korea and Canada looking to join these programmes in the future, we are opening the doors to further pioneering, international collaboration with a growing group of countries.
We were always clear that we wanted to associate with Horizon, and that is why we had it in the trade and co-operation agreement. However, as hon. Members know only too well, we were not able to commence those negotiations over the last two years because the European Union had linked it to the Northern Ireland protocol. It was our Prime Minister’s Windsor framework that broke the deadlock and allowed us to commence negotiations.
We said all along that we would accept only a good deal, which is why we did not take the first deal on the table. Instead, we pursued a bespoke agreement that delivers for British taxpayers, researchers and businesses. We will not pay for a second of the time in which we were not members of the programme, and our deal protects and benefits hardworking taxpayers through a new clawback mechanism.
What is more, our scientists and researchers can benefit from Horizon today, meaning they can immediately bid into the programme, with certainty over funding. All calls in the 2024 work programme, including those that open for bids this year, will be funded through our association to Horizon, while the few remaining 2023 work programme calls will be funded by the UK guarantee.
But this is not just about Horizon. We needed a bespoke deal that gave us access only to EU programmes that would benefit the UK, not to those that would not. Listening to voices from our world-leading fusion sector, we will not be joining Euratom. Instead, we are investing an additional £650 million straight into our cutting-edge fusion sector, assisting our journey to becoming a science and technology superpower by 2030.
When I first started in this role, I made it my No. 1 priority to listen to the voices and views of the scientific and tech communities. What I heard loud and clear was how essential associating to Horizon Europe was for the sector, and I am delighted that this Government have now delivered on that. The deal we have negotiated has been warmly welcomed by the whole of the scientific community. It gives the sector the certainty it needs to continue delivering long-term research and innovation, and it will enable it to change people’s lives and have a truly global outlook. Members do not need to take my word for it; today’s announcement has been supported by Universities UK, the Russell Group, all four of our prestigious national academies, leading tech businesses, including Airbus and Rolls-Royce, and countless more.
The deal is not just about funding and support for universities, businesses and scientists. It is a deal that has a real-world impact for people and communities throughout the UK. This deal is set to create and support thousands of new jobs as a new generation of research talent is attracted to the UK and works across the globe. The deal we have negotiated will allow the UK to continue to play a leading role on the international stage in solving the biggest challenges that we face, from climate change and the race to net zero to cures for cancer, dementia and other life-threatening diseases.
Alongside this deal, the Government are proudly backing our science and tech communities. We have committed to invest £20 billion in research and development by the next financial year. That means record funding for wider priorities, from harnessing the power of AI, to improving our public services, to tapping the potential of quantum computing. We will continue to strengthen our collaboration with countries beyond Europe, building on the success of the international science partnership fund we launched earlier this year, to deliver our truly global science approach with global benefits.
Today we take another giant leap forward in our mission to make Britain a science and tech superpower. I am confident that scientists and businesses are ready to seize the moment. The horizon could not really be brighter for British science and technology. I commend the statement to the House.
I call the shadow Secretary of State.
In a past life, I was a university lecturer. I have to say that, if one of my students had turned up to hand in an assessment two years late, I would not have been terribly amused. I do not think anyone could be very amused by the two wasted years here. On science policy in this country, we have a classic case of lions being led by donkeys.
Britain is blessed with many of the world’s greatest innovators: the developers of the covid vaccine and the internet, cancer specialists and green energy pioneers. We are home to those who are at the vanguard of research, yet they have been failed by this Conservative Government time and again. They have left our researchers locked out of the world’s leading scientific collaboration project, worth over £80 billion, for the past two years. It has been like keeping Lionel Messi or England’s Lucy Bronze out of the World Cup.
We have already seen reports of cancer research specialists leaving the UK to pick up Horizon projects elsewhere, while we have lost two years of funding rounds. That vital ground has been lost and cannot be regained, despite a promise in the Conservatives’ 2019 manifesto. That is what happens when bluster and division are put above delivering for people.
The Secretary of State spoke about the link with the Windsor framework. It was this Conservative Government who negotiated the Northern Ireland protocol in the first place and it is little wonder this Government have presided over such anaemic economic growth. So although the long-delayed confirmation of association to Horizon and Copernicus will be a relief, it cannot undo the damage that has already been caused and leaves serious questions for the Government to answer.
In her statement, the Secretary of State spoke about some of the costs, but can she set out the precise quantum of the financial contribution to Horizon and the other schemes in the years ahead? Has any financial disadvantage been accrued through missing out on the first years of the scheme? Could she confirm how the UK’s position as an associate member of Horizon impacts our ability to strategically shape the future of the Horizon programme? How do we ensure terms that are advantageous for our research communities?
I heard the Minister for Science, Research and Innovation, the hon. Member for Mid Norfolk (George Freeman), chuntering earlier. He will probably recall his contribution to this debate:
“Of the three—Euratom, Copernicus and Horizon—Euratom is probably the hardest of all to reproduce…I still think of them very much as a bundle. We would like to remain in all three, but, if I had to pick one, Euratom is the one”.
Those were his words, but the agreement does not include association to Euratom. Can the Secretary of State outline what risks that might pose to international collaboration and energy security?
In short, today’s announcement is long overdue. It leaves vital questions outstanding. What I have no doubt about is that our brilliant scientific community can rise to the challenges and make the best of the hand that they have been dealt. I have no doubt either, I am afraid, that we cannot go on being held back by this chaotic Conservative Government who are a drag anchor on so much that makes Britain great.
I welcome the right hon. Member to his position. I am delighted that the Opposition have finally got round to appointing a ministerial team to shadow the Department for Science, Innovation and Technology—it took them six months, but they did get there in the end.
I am also delighted that the right hon. Member has acknowledged the significance of this Government deal, but to address his point about the delay, he knows only too well that it was the European Union that linked Horizon association directly with the Northern Ireland protocol and it is this Government and this Prime Minister who managed to unlock that with the Windsor framework. It is also this Government who bridged that gap with the Horizon guarantee, spending more than £1 billion.
As soon as the framework was agreed, I was the first to hop on the train to Brussels to see the commissioner to ensure that we could kickstart that negotiation. At the time, I was eight and a half months pregnant, but I thought that that was vital to our sector and I am glad that we are able to deliver today. One thing I will not do is apologise for the Government wanting to get a good deal. Let us remember it was the Opposition who called for us to accept the deal on the table back in March. If we had done that, we would not have this good deal for our taxpayers, our businesses, our scientists and our researchers. I have already—it was in the statement—clarified the point that we will not pay for one moment that we were not associated with Horizon, but I reiterate that point.
To answer some of the right hon. Member’s other questions, the cost will be £2 billion a year and, as I have said, we are injecting £650 million directly into our fusion sector. On Euratom, the Minister of State for Science, Research and Innovation agrees with me that it is the right strategy to proceed with Horizon and Copernicus, but not with Euratom. It is not just we who believe that. The Fusion Industry Association has welcomed the UK Government’s ambitious package of £650 million. Ian Chapman has said that he welcomes the clarity over our future relationship. In fact, the association made representations directly to us to ensure that we put the money straight into our sector.
This is a great deal for Britain, for the taxpayer, for businesses, for scientists and for researchers. We believe that our country has the potential to be a science and tech superpower. It is a shame that the Opposition do not.
I call the Chair of the Science, Innovation and Technology Committee.
Science does not recognise borders, and everyone wins when the best UK scientists can work with the best in the EU and around the world, so this is a huge and positive announcement and has been greeted with delight and relief not just by the science community in the UK, but across Europe and beyond.
My Select Committee, the members of which are in the Chamber, will examine the deal in detail, but may I congratulate the Secretary of State, her Minister and the whole of the Government on what seems to be a shrewd agreement that, for example, allows us to win grants even beyond our own financial contribution? Will she confirm that Horizon funding is available not just to academic institutions, but for innovation by British industry? Has she consulted formally the UK Atomic Energy Authority, which runs our fusion programme, about not participating in Euratom and, if so, what is its view? Does she agree that, with the reputation of British science as high as it is, with the science budget doubling as it has over the past 10 years to £20 billion a year by next year, and with now the opportunities of rejoining Horizon opening up, this is a golden opportunity for the UK to advance our status as a science superpower?
I could not agree more with my right hon. Friend, the Chair of the Select Committee. I am delighted that he has welcomed this announcement today. In relation to his comments on Euratom, we did consult the sector and the UK Atomic Energy Authority widely, and the authority has welcomed this publicly, along with many stakeholders, including the business community, which will also benefit from this announcement today.
The SNP welcomes this move, which will provide much-needed certainty and kickstart new research opportunities for key strength areas of the Scottish economy, including life sciences. The Prime Minister himself has said that rejoining this EU scheme is
“critical to a brighter economic future”.
But the SNP believes that rejoining the EU as a full member state is much more critical than that. Unfortunately, I know that this Government, and probably the incoming Labour Government, strongly disagree with that, to the detriment of Scotland.
Securing Horizon association is a matter of pressing importance. We must not forget that universities and members of the research community in Scotland have missed out on their share of the all-important funding provided by the €95.5 billion European research and innovation programme since the UK Government’s decision to pursue a hard Brexit.
We are disappointed that Euratom is not going to be pursued and is being taken separately. Although we welcome the funding, I think we all agree that it is much better that we work in conjunction with our European neighbours. Scotland has also been locked out of Copernicus, so what is the status of re-entering that and, indeed, the Erasmus+ scheme?
We will not be rejoining the EU under this Government because we believe in democracy. On Euratom, the best people to listen to are the sector themselves, who told us directly and clearly that they would be better off with the money going straight to them and that is what we have done. We have listened to the sectors involved and we have delivered. This is a fantastic deal that creates many opportunities for businesses, scientists and researchers. It is not to be confused with Erasmus, which the hon. Member raised. That is a separate scheme. In fact, it was this Government, and I personally when I worked in the Department for Education, who established the Turing scheme, which is better than Erasmus because it is global in nature and supports those from different backgrounds.
Perhaps I could declare an interest: in my previous job, I was one of the six European Parliament rapporteurs involved in setting up the initial Horizon 2020 project and the only one from the United Kingdom. What I learnt during the five years that I worked on that project was that this is not just for Nobel prize winners or mega companies; it is also for researchers at the start of their careers, for innovators and for less well-known companies such as Teledyne e2v in Chelmsford, which provides our eyes and ears to world space programmes.
What I heard time and again was that, if we create an opportunity for scientists and researchers and combine that with the ability to work across borders and across disciplines, we will have a formula that will often result in better innovation and more effective solutions to some of the world’s trickiest problems. May I thank the Prime Minister, his ministerial team and all those on the EU side—for there were many—who continued to press to have British science in these programmes? It is a great deal for Britain, for all of us on the continent of Europe and for all of us who live in this world.
My right hon. Friend speaks a lot of sense. I thank her for her thanks, and for those to the Prime Minister and the negotiating team, who have done us proud in bringing home a deal that will truly deliver. I know that this is something that she has worked on considerably in her time and is passionate about.
I am very relieved by today’s statement, hopefully just in the nick of time to avoid really serious damage to UK science. I welcome it, and particularly applaud the contribution made by the Minister for Science, which I know has been very important to achieving this outcome. We are gaining associate membership of Horizon. To what extent does that give us a seat at the table to influence the future shape of the programme?
We will be able to lead projects from 2024. Most of the projects open at the moment—80% to 90%—are for 2024, and we have the opportunity to lead them, so we can be at the forefront of this agenda.
I warmly welcome the statement by my right hon. Friend and constituency neighbour. It will be warmly welcomed by the scientific community across the United Kingdom. It might even be described as one small step for her but a giant leap for British science. Will she comment particularly on Arctic science, as 78 universities or other institutions are looking into matters in the Arctic at this moment? They will warmly welcome the rejoining of Horizon, but I want to hear from her a particular commitment by the British Government to further support British science in the Arctic. It is such an important area with regard to climate change and other things, and I want to hear that she and her hon. Friend, the Minister for Science, Research and Innovation, who does great work on these matters, are fully committed to supporting British science in the Arctic.
I know that my hon. Friend and constituency neighbour does a great deal of work on this as Chair of the Environmental Audit Committee, with a keen interest in this area. We have a fantastic track record when it comes to Arctic science, being fourth in the world, and we want to climb up that league table. Membership of Horizon Europe will certainly help us to achieve that.
The announcement that the UK will rejoin Horizon is very welcome, and I am very pleased about it, but there is so much more to be done to restore academic co-operation with the EU, especially for students. The Turing scheme currently compares poorly with the Erasmus programme. The University of Bath, as the right hon. Member will know, is a science university. As University of Bath students point out, the Turing scheme requires universities to forecast where students will go, a year in advance, before they bid for funding. It restricts the freedom of students and creates a major challenge for universities. Will she work with the students’ union at the University of Bath to ensure that Turing will work as smoothly as Erasmus?
I am more than happy to work with the Department for Education and co-ordinate a conversation with the University of Bath, but it is important to note that the Turing scheme is different from Erasmus; it is better. It is global in nature. It is also more inclusive. The statistics on the Erasmus scheme show that it particularly helped children of families from middle-class backgrounds, whereas the Turing scheme is much more accessible.
I congratulate the Secretary of State and her Science Minister on a fantastic deal, which the scaremongers said could never happen. They said that we had to take what we were offered. We did not, and we have an excellent deal. Later today, I hope that the Science Minister will respond to a debate that I will lead on fibrodysplasia ossificans progressiva, a terrible genetic condition where muscle turns to bone, restricting the life chances of so many people, including some of my constituents who are here. There was a Horizon project looking into this, but it was suspended because of covid. Will we be allowed to go back into that, even though we have had this period of time out, or is that something that the Science Minister would like to write to me about?
The Science Minister has agreed to meet with our right hon. Friend and discuss this at great length.
I thank the Prime Minister for his timely and positive action following my question to him yesterday. I welcome the news, which comes as a huge relief to the research community after so much prolonged uncertainty. Can the Secretary of State reassure the chief executive of Cancer Research UK, who welcomes the Government’s decision and has expressed the hope that this deal will pave the way for continued UK participation in future European research programmes, that it will indeed do so?
The hon. Member’s question to the Prime Minister was indeed timely. I can give her that assurance. One of the key missions of Horizon Europe focuses on tackling and addressing cancer, and that will continue to be key.
I congratulate my right hon. Friend on today’s announcement that the UK will be rejoining Horizon. Scientific research opportunities are vital to the development of our future industries. However, young people in rural parts of the country, such as North Devon, often do not see the possibilities of a career in science. Will she work with Cabinet colleagues to ensure that the opportunities from today’s announcement extend right into our remote rural communities?
It is vital that the opportunities that young people have are not capped by the location where they live or are born, and that is certainly a key part of our levelling-up agenda. When it comes to rural communities, agri-tech is absolutely at the heart of the areas that we are focusing on. Horizon Europe will open up those potential collaborations across the globe.
It is a relief that the EU has relaxed its rather self-defeating ban on the participation of British scientists and researchers in projects with their European peers. I congratulate the Government on holding fast to achieve a deal that was in the best interests of British taxpayers. I will probe the Secretary of State on three specific points. She talked about the opportunity for the UK to lead projects, but she did not say whether the UK will have the right to determine the focus of projects in the future. I am interested in that specific point.
Secondly, it is always important to have all the brains in the building working on projects. Will the UK have the opportunity post 2027 to see the Horizon programme expand beyond its existing members? Thirdly, will she reassure the House that the Government’s commitment, which they have shown year on year, to increase research and development spending will continue to focus on the competitive interests of the UK first and foremost?
We are confident that we will be able to use the Horizon programme to collaborate on areas of shared interest, including on strategically sensitive technologies, such as chips and semiconductors. Given the deal that we have agreed, we will be able to play a leading role within the Horizon agenda, and help to guide it through and expand it. Future association would, of course, be for future Governments to determine, but I am confident that our scientific community will seize this opportunity, utilise it, and prove how valuable membership of Horizon Europe is. I would be delighted to meet my hon. Friend to discuss his interest in these topics.
(1 year, 2 months ago)
Commons Chamber
On a point of order, Madam Deputy Speaker. A few moments ago the Secretary of State inadvertently promoted me to the position of Chair of the Environmental Audit Committee—our right hon. Friend the Member for Ludlow (Philip Dunne) would have been surprised and disappointed to hear that. I am in fact Chair of the Environmental Audit Sub-Committee on Polar Research, which is looking into the Arctic. I wonder whether my right hon. Friend might like to correct the record.
Further to that point of order, Madam Deputy Speaker. I am sure that it is only a matter of time, but I correct the record.
I thank the hon. Gentleman and the Secretary of State for, between them, correcting the record.
(1 year, 2 months ago)
Written Statements
The Government have successfully concluded negotiations with the European Union regarding the UK’s participation in EU science and research programmes: Horizon Europe and Copernicus.
From today, UK scientists can bid and participate confidently in the world’s largest programme of research co-operation—alongside their EU, Norwegian, New Zealand and Israeli colleagues—and with countries like Korea and Canada looking to join.
UK academics and industry will be able to bid for, secure funding for and, crucially, lead the vast majority of new calls that will be opening throughout the autumn. UK researchers and businesses can be certain that all successful UK applicants will be covered through the UK’s association for the rest of the programme, or through the remainder of the UK’s Horizon Europe guarantee scheme as we transition to these new arrangements. All calls in Work Programme 2024 will be covered by association, and the UK guarantee scheme will be extended to cover all calls under Work Programme 2023. UK scientists and researchers can lead project consortia under Work Programme 2024—a key ask of the sector—allowing them to shape the next generation of international collaboration.
Under the previous programme the UK established over 200,000 collaborative links, and we will now play a leading role in a range of ground-breaking industry collaborations, such as the AI, Data and Robotics Partnership, worth over £2 billion, or the Cancer Mission, aiming to help more than 3 million people by 2030.
Access to Horizon Europe was a top ask of our research community. We have listened to our sector and in this deal delivered collaboration where it is most valuable to UK science. This provides our scientists with a stable base for international collaboration and makes sure we are on track to deliver on the ambition to make the UK a science and technology superpower by 2030.
The Government have negotiated a bespoke deal in the UK’s national interest. It strengthens UK science, boosts economic growth and delivers for the UK taxpayer. This bespoke deal works for the UK by ensuring that we do not pay for the time we were not associated. It also delivers a new mechanism protecting our taxpayers in case the UK ends up putting significantly more into the pot than our scientists get out. This deal also means that the UK has a greater ability to overperform than other associated countries outside the EU/EEA, reflecting our confidence in UK science.
We will also associate to the Copernicus programme, a state-of-the-art capacity to monitor the Earth, and to its services. The UK’s association to Copernicus comes at a crucial moment, when the Copernicus space infrastructure and its information services will evolve further and our contribution to understanding and acting on environmental and climate change-related challenges is more important than ever. Access to this unique Earth observation data will provide early warning of floods and fires and allow the UK’s world-leading sector to bid for contracts worth hundreds of millions of pounds. The UK will also have cost-free access to the EU Space Surveillance and Tracking services, providing important information about objects in space.
The UK will not join the Euratom programme. The UK fusion sector has communicated a preference for an alternative approach involving direct investment in the UK sector, and we are pleased to announce that we will be doing exactly that. We plan to invest up to £650 million to 2027 in a suite of new, cutting-edge alternative programmes, subject to business cases, and will announce further details shortly.
[HCWS1006]
(1 year, 6 months ago)
Written Statements
I am repeating the following written ministerial statement made today in the other place by the Under-Secretary of State for Culture, Media and Sport, my noble Friend Lord Parkinson of Whitley Bay:
Following commitments made in the House of Commons, His Majesty’s Government has tabled a number of amendments to the Online Safety Bill. These will improve the regulatory framework by strengthening protections for internet users, particularly children, reflecting the Bill’s primary objective of keeping children safe online.
Senior management liability
These amendments will strengthen the accountability of online services by making providers and senior managers criminally liable for failures to comply with steps set out in a confirmation decision, when those steps relate to specific child safety duties. As promised in the House of Commons, we based our approach on provisions in the Irish Online Safety and Media Regulation Act 2022, which introduced individual criminal liability for failure to comply with a notice to end contravention. The offence will be punishable with up to two years’ imprisonment. In conjunction with the existing clause 178 (liability of corporate officers for offences), this fulfils the commitment made in the House of Commons to create a new offence that captures instances where senior managers, or those purporting to act in that capacity, have consented to or connived in ignoring enforceable requirements, risking serious harm to children.
I would like to thank my hon. Friends the Members for Stone (Sir William Cash) and for Penistone and Stocksbridge (Miriam Cates) for all of their hard work and dedication in this area. The tabled amendment will provide the legal certainty needed for the offence to act as an effective deterrent, and to be prosecuted effectively.
Recognised news publisher content—“taking action”
This amendment has been tabled to clarify that category 1 services need to notify recognised news publishers and offer a right of appeal before action is taken against their content for a suspected breach of terms of service, and not in relation to routine or personalised content curation. This amendment will also ensure that platforms are not prevented from displaying warning labels on content encountered by children.
Duty to publish a summary of illegal and child safety risk assessments
These amendments will require the providers of the largest services to publish summaries of their risk assessments for illegal content and content that is harmful to children. These platforms must also supply Ofcom with records of those risk assessments. These amendments will increase the level of transparency regarding these platforms’ approaches to safety, and the risk of harm on their services. This will empower parents and other internet users to make informed decisions when choosing whether and how to use them.
Statutory consultees: victims’, domestic abuse, and children’s commissioners
These amendments to the Bill name the victims’, domestic abuse and children’s commissioners as statutory consultees for Ofcom. Ofcom will be required to consult each Commissioner in the course of preparing a draft code. This will ensure that the voices of children and victims of abuse—including victims of violence against women and girls—are properly considered during implementation of the framework.
Priority offences
These amendments seek to add priority offences to strengthen the Bill’s illegal content duties. Providers will be required proactively to tackle content and activity amounting to these offences.
First, we are seeking to add the controlling or coercive behaviour offence. This will add to the existing protections in the Bill for women and girls, to ensure providers design and operate their services to protect women and girls from this behaviour when it occurs on their platforms.
Secondly—and with thanks to my hon. Friend the Member for Dover (Mrs Elphicke) and my right hon. Friend the Member for South Holland and The Deepings (Sir John Hayes) for raising this important issue—we are adding new offences relating to illegal immigration and modern slavery, to ensure that the Bill does more to prevent services being used to facilitate these crimes.
The Government are also tabling a technical amendment to add the foreign interference offence being introduced by the National Security Bill to the list of priority offences in schedule 7. This amendment will ensure that the Online Safety Bill requires social media firms to identify and root out state-backed disinformation. This provision was originally included in the National Security Bill, but as that is likely to receive Royal Assent before the Online Safety Bill the provision will instead be included in the Online Safety Bill to ensure clarity of legislation.
Recognised news publisher definitions (sanctioned entities)
This amendment will ensure that any entity that is designated for the purposes of sanctions regulations does not qualify as a “recognised news publisher” under the Bill, and therefore will not benefit from the protections reserved for such publishers.
The Government are also tabling a number of technical amendments to the Bill. These amendments will resolve technical drafting issues, provide further legal clarity for business, and ensure that the Bill is as effective as possible. These include:
Communications offences
This amendment extends the false and threatening communications offences, which currently apply only to England and Wales, to Northern Ireland. In the absence of an Executive in Northern Ireland, the process for securing legislative consent for this extension cannot be commenced.
The Department for Science, Innovation and Technology (DSIT) is in regular contact with the Northern Ireland civil service, who are content that the Department proceed without the approval of the Executive. Following engagement with the UK Government, the Scottish Government have decided not to introduce these offences at this time.
Permissive extent
This amendment introduces a permissive extent clause that will allow the provisions of the Bill to be extended to the Bailiwick of Guernsey and the Isle of Man in the future.
Funding changes
This amendment comprises small, technical changes to the Bill to facilitate the structure of funding for the regime, with fees expected to be charged from the financial year 2025-26 or later. As previously announced, Ofcom will be expected to recover the initial costs of setting up the regulatory regime and meet their ongoing costs by charging fees to regulated services with revenue at or above a set threshold.
Proactive technology
This amendment clarifies that Ofcom can only recommend or require the use of content moderation technology for the illegal content, children’s safety, and fraudulent advertising duties. This is in line with existing policy to ensure that there are strong safeguards for freedom of expression and privacy. This does not affect the tech-neutral nature of the Bill, and Ofcom will be able to recommend a range of technologies that companies can use to fulfil their duties.
The amendments detailed in this statement will ensure that the Online Safety Bill presents the right balance in its provisions for the safety of children and adults online, while ensuring that the regime remains proportionate and future-proof.
[HCWS726]