Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether the £20 billion research and development budget previously allocated to the former Department for Business, Energy and Industrial Strategy has been allocated to her Department.
Answered by Paul Scully
The commitment the Government made to spend £20bn on public R&D investment in 2024/25 remains in place and is a cornerstone of our plans to cement the UK’s place as a science and technology superpower.
At the Spending Review, HM Treasury allocated R&D funding across all Government Departments, of which the Department for Business, Energy and Industrial Strategy (BEIS) accounted for 71% in 2024/25.
The majority of BEIS R&D funding has been allocated to the Department for Science, Innovation and Technology, except for policy areas where responsibility sits with another Secretary of State. For example, the Net Zero Innovation Portfolio and R&D investment by the Nuclear Decommissioning Authority transfer to the Department for Energy Security and Net Zero.
Budget allocations for the new Departments will be confirmed in the upcoming Main Estimates 2023/24.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps she is taking to support (a) responsible research and development into and (b) the use of ethical practices on artificial intelligence by industry.
Answered by Paul Scully
The Government is taking a number of steps to support responsible research and promote ethical practices in the AI industry. The AI Regulation White Paper sets out five cross-sectoral principles which will guide and inform the responsible development of AI. In addition, this Government has allocated £2 million to develop regulatory sandboxes that will make it easier for businesses to navigate the regulatory landscape and bring innovative products to market in line with our principles.
We will also continue to take a leading role in global standards development organisations, such as the ISO and IEC, to develop global AI technical standards that uphold our democratic values. In this context, the AI Standards Hub led by The Alan Turing Institute, in collaboration with the British Standards Institution and the National Physical Laboratory, and supported by the UK Government, will aim to grow the UK’s multi-stakeholder contribution to the development of global AI technical standards.
£8.5 million of funding was made available via the Arts and Humanities Research Council in June 2022 for ‘Enabling a Responsible AI Ecosystem’, the first major academic research programme of this scale on AI ethics and regulation. This is complemented by the £117 million investment secured this year for new UKRI Centres for Doctoral Training, on top of £100 million for existing Centres funded in 2019, which include ethics and social responsibility courses for the PhD candidates they train.
In April this year, the Government announced that it is establishing a Foundation Model Taskforce with £100 million of start-up funding to ensure sovereign capabilities and broad adoption of safe and reliable foundation models. The Taskforce will focus on opportunities to establish the UK as a world leader in foundation models and their applications across the economy, and on acting as a global standard bearer for AI safety.
Finally, the Crown Commercial Service’s AI Marketplace dynamic purchasing system for public sector procurement of AI, which operationalised the recommendations of the Office for AI’s procurement guidelines, sets a baseline ethics standard for suppliers to government.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether her Department plans to take steps to help ensure that AI (a) algorithms and (b) systems are transparent, explainable, and free from bias or discrimination.
Answered by Paul Scully
Our recently published white paper sets out a framework for regulating AI, which is underpinned by five cross-sectoral principles which will inform the responsible development and use of AI. These principles include ‘Appropriate transparency and explainability’ and ‘Fairness’.
As set out in the white paper, our iterative framework aims to ensure that AI systems are appropriately transparent and explainable, allowing individuals, regulators and organisations to access appropriate information about an AI system and to interpret and understand the decision-making processes behind it. Together, appropriate transparency and explainability will help to drive trust in and understanding of AI systems.
The white paper is subject to public consultation open until 21 June 2023.
In terms of AI systems used in delivering public services, the Government was one of the first in the world to implement an algorithmic transparency standard for use in public service delivery, allowing public sector organisations to provide clear information about algorithmic tools they use to support decisions, including why they are using them. The Government’s AI procurement guidelines also recommend that systems being procured should undergo an equality impact assessment in order to ensure that AI meets the needs of the diverse society it serves.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to support international (a) cooperation and (b) coordination on artificial intelligence regulation on (i) tackling cross-border challenges and (ii) ensuring that ethical standards are upheld globally.
Answered by Paul Scully
The inherent cross-border nature of the digital ecosystem means it is imperative that we work closely with partners to prevent a fragmented global market, ensure interoperability and promote the responsible development of AI internationally.
Many businesses developing and using AI are operating across different jurisdictions, and we recognise the importance of working with global partners to develop a responsive and compatible system of global AI governance, allowing the UK and others to engage meaningfully on cross-border AI risks and opportunities. This will support our vision for a global ecosystem that promotes innovation and responsible development and use of technology, underpinned by our shared values of freedom, fairness, and democracy.
The UK AI Regulation White Paper, published in March 2023, recognises the critical role of international collaboration and coordination in AI governance, and prioritises laying the foundations for interoperability: ensuring AI systems can work together as required and that processes are complementary and robust. This will include the role of tools for trustworthy AI such as technical standards and assurance techniques to reduce technical barriers to trade and increase market access.
The UK is already playing a leading role in international discussions on AI ethics and potential regulations, such as work at the Council of Europe, the OECD, UNESCO, the Global Partnership on AI and the G7. The Government will continue to work with our partners around the world to shape international norms and standards relating to AI, including those developed by multilateral and multistakeholder bodies at global and regional level, and to promote the safe and responsible development of AI.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether she is taking steps with stakeholders from the artificial intelligence industry to establish (a) standards and (b) guidelines for the (i) safe and (ii) secure deployment of artificial intelligence technologies.
Answered by Paul Scully
The AI Regulation White Paper proposes a proportionate, collaborative approach to AI regulation.
The approach set out in the white paper draws on the full range of tools that support effective governance, including technical standards and assurance techniques, to support the implementation of the UK’s approach. The Government is actively supporting the development of these tools. The Centre for Data Ethics and Innovation is building on the AI Assurance Roadmap to establish an AI assurance ecosystem in the UK, and the UK AI Standards Hub champions the use of global technical standards. These initiatives include collaboration with industry to showcase how these tools can be applied to real-world use cases in line with the AI regulatory principles.
This collaborative approach is also reflected in the UK’s plans to accelerate its capability in artificial intelligence. The Foundation Model Taskforce will advance the safe and reliable use of this pivotal AI technology across the economy, and ensure the UK is globally competitive in this strategic technology.
The UK will continue to take a leading role in international discussions on the responsible and ethical development of AI through multilateral forums such as the OECD, the Global Partnership on AI (GPAI) and the G7, where we were particularly pleased to reach an agreement last week that recognised the need for close working on AI, with a focus on generative AI.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, with reference to her Department's publication entitled A pro-innovation approach to AI regulation, published on 29 March 2023, what steps her Department plans to take to ensure regulatory frameworks keep pace with (a) emerging AI applications and (b) other technological advancements.
Answered by Paul Scully
The AI Regulation White Paper proposes a proportionate, collaborative approach to AI regulation, and aims to promote innovation while protecting the UK’s values. Our approach is designed to ensure the Government is able to adapt and respond to the risks and opportunities that emerge as the technology develops at pace.
The Government is also working with international partners to understand emerging technologies and AI trends, while promoting the UK’s values, including through key multilateral fora, such as the OECD, the G7, the Global Partnership on AI (GPAI), the Council of Europe, and UNESCO, and through bilateral relationships.
The AI Regulation White Paper proposes a range of new central functions, including a horizon-scanning function intended to support the anticipation and assessment of emerging risks. This will complement the existing work undertaken by regulators and other government departments to identify and address risks arising from AI.
As set out in the white paper, the Government will continue to convene a wide range of stakeholders, including frontier researchers from industry, to ensure that we hear the full spectrum of viewpoints.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, how many former staff from the Departments for (a) Business, Innovation and Skills and (b) Digital, Culture, Media and Sport were moved to her Department.
Answered by George Freeman
The Department for Science, Innovation and Technology is completing the transfer of around 935 staff from the former Department for Business, Energy and Industrial Strategy, and around 800 staff from the former Department for Digital, Culture, Media and Sport. The staff data is live, so these numbers may move slightly ahead of the legal transfer date, which will be in mid-June. They do not include BDUK staff, whose sponsorship will transfer to DSIT but whose employer will not.
Asked by: Matt Hancock (Conservative - West Suffolk)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department plans to take to support cross government research and development spending.
Answered by George Freeman
On 6 March 2023, the Government launched its Science and Technology Framework, setting out its science and technology agenda to 2030. This framework, led by the Department for Science, Innovation and Technology, will challenge all of government to put the UK at the forefront of global science and technology and create a coordinated cross-government approach.
Optimising public and private sector investment in research and development (R&D) is one of the framework’s 10 strands. This includes the Government’s pledge to increase public investment in R&D to £20 billion by 2024/25, the largest increase in the public R&D budget over a Spending Review period.