Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what discussions she has had with Ofcom on its classification system for AI chatbots; and whether her Department plans to review the classification of chatbot services as search services.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
Last year, Ofcom published a letter setting out that if an AI service searches the live internet to return its results, it will be regulated under the Online Safety Act as a search service.
The Secretary of State has confirmed in Parliament that the government will further consider the role of chatbots and how they interact with the Online Safety Act, and has urged Ofcom to use its existing powers to ensure chatbots are safe for children.
Where evidence demonstrates that further action is necessary to protect children and the wider public, we will not hesitate to act.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to lead international efforts to establish agreed standards for AI safety and ethics in fraud prevention; and what assessment she has made of the potential impact of the UK's role in shaping global AI policies to combat scam operations.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The UK is leading international efforts to raise AI safety standards. Through the AI Security Institute, we are building world-first public capabilities to test advanced AI systems and share methodologies internationally. We also work with our international partners across several multilateral organisations and standards bodies, including the G7, G20, UN, OECD and GPAI, to address a range of AI-related issues.
Domestically, the Online Safety Act requires major platforms and search services to assess and mitigate fraud risks, including those amplified by AI, and to take swift action to remove scam content from their platforms.
In addition, the Home Office will continue to ensure that law enforcement has the capabilities it needs to tackle perpetrators who exploit AI, while working closely with international partners and with the tech industry to build resilience and protect the UK public and businesses.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the level of risk to UK competitiveness from underinvestment in (a) AI and (b) defence technology; and what steps she is taking to ensure that the UK does not fall behind international competitors in AI development and deployment.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
In January, we accepted all 50 recommendations of the AI Opportunities Action Plan, setting out the steps we are taking to ensure the UK does not fall behind global competitors' advances in AI, but is instead an AI maker, not an AI taker.
At the Spending Review, we committed up to £2 billion to deliver this plan, and we are now 11 months into delivery. We are investing in the foundations of AI through world-class computing and data infrastructure: for example, increasing public compute twentyfold by 2030 through the expansion of the AI Research Resource programme, and announcing four AI Growth Zone sites since January this year. Through the £500 million-backed Sovereign AI Unit, we will also combine equity investment with other levers to back British businesses to become national champions in critical domains.
DSIT is also working with the MoD to foster a world-leading UK defence technology sector, by establishing the UK Defence Innovation (UKDI) Organisation and collaborating on National Security Strategic Investment Fund (NSSIF) investment programmes.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what support her Department is providing to enable local authorities to commission AI skills training for SMEs and community groups in their areas.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The AI Opportunities Action Plan sets out how we can strengthen our AI skills and talent base to ensure AI can be used by workers and the public across the UK. We are providing targeted support to SMEs, training 7.5 million workers in essential AI skills by 2030, and trialling AI traineeships at the National Innovation Centre for Data (NICD) in Newcastle, which help new UK AI graduates develop industry-ready skill sets by working on real-world projects through industry placements.
We are also providing £5 million for each AI Growth Zone (AIGZ) to support skills and adoption in the area, and ensuring that local authorities keep 100% of the business rates generated by AIGZ sites where no pre-existing arrangements exist.
We are targeting our funding where it will be most impactful, and continue to forge strong partnerships with industry and local government to deliver these initiatives.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps the Government is taking to showcase departmental AI pilots, including (a) which tools are being used, (b) what safeguards are in place, and (c) what has succeeded or failed; and whether she will publish accessible case studies to provide templates for responsible AI adoption by SMEs, charities, and public sector organisations.
Answered by Ian Murray - Minister of State (Department for Science, Innovation and Technology)
The government is promoting departmental pilots through the PM's Exemplars Programme, which has been established to learn from high-potential AI pilots in areas such as health, education and planning, and to share lessons on what works and what does not. AI tools used in the public sector are also promoted via the public AI Knowledge Hub, a centralised repository of use cases, guidance and prompts, and through an AI Community of Practice available to all public sector workers.
All AI projects across Government are safeguarded by access to DSIT's suite of responsible AI guidance, tools and expertise, which enables rapid innovation whilst ensuring a transparent, trustworthy and responsible approach.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to support SMEs to (a) implement cybersecurity measures and (b) procure AI systems securely; and whether she will make an assessment of the potential merits of providing (i) subsidised support and (ii) guidance to tackle the cost pressures that prevent small businesses from adopting secure-by-design practices.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
Improving the cyber security of our nation's SMEs is critical to the resilience of the wider economy. The Government provides free tools, guidance and training to help SMEs implement cyber security measures. This includes the National Cyber Security Centre's (NCSC) recently launched Cyber Action Toolkit, which provides SMEs with tailored advice.
The Department for Science, Innovation & Technology (DSIT) and the NCSC have introduced several voluntary Codes of Practice covering software, AI, and apps and app stores. These measures, co-designed with industry and experts, set minimum security requirements and support SMEs in securely adopting AI systems.
We will continue to work with industry and monitor the impact of these Codes of Practice. This will enable us to assess their effectiveness and to consider further guidance and incentives to help SMEs confidently implement secure-by-design practices in a cost-efficient way. For immediate assistance, SMEs should get in touch with their regional Cyber Resilience Centre; these centres are run by the police and the Home Office and offer free cyber advice and support to SMEs.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what transparency conditions are currently required when government departments procure AI systems from private companies; and what mechanisms are in place to ensure public sector bodies can explain AI-driven decisions to citizens when the underlying models are proprietary.
Answered by Ian Murray - Minister of State (Department for Science, Innovation and Technology)
Since February 2024, all government departments and arm’s-length bodies must comply with the Algorithmic Transparency Recording Standard (ATRS), which mandates publishing details on algorithmic tools, including decision-making processes, human oversight, technical specifications, and risk assessments. Suppliers are required to provide sufficient information for transparency records, with exemptions balancing commercial sensitivities. Over 36 ATRS records have been published to date.
The AI Knowledge Hub further enhances transparency by sharing open-source code, problem statements, and performance metrics.
Additionally, the Open Source AI Fellowship promotes explainability through publicly inspectable models. These measures enable government to explain AI-driven decisions while maintaining accountability.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to establish standardised testing frameworks for identifying bias in AI datasets; and whether she will consider introducing requirements for the quality of databases used to train artificial intelligence systems.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
AI is already regulated in the UK: a range of existing rules apply to AI systems, such as data protection, competition and equality legislation and sectoral regulation. The government is committed to supporting regulators to promote the responsible use of AI in their sectors, including by identifying and addressing bias.
To help tackle this issue, we ran the Fairness Innovation Challenge (FIC) with Innovate UK, the Equality and Human Rights Commission (EHRC) and the ICO. The FIC supported the development of novel solutions to address bias and discrimination in AI systems, and helped the EHRC and the ICO shape their own broader regulatory guidance.
The government is committed to ensuring that the UK is prepared for the changes AI will bring.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the benefits of (a) a duty of candour requiring AI developers and deployers to publicly disclose when biases are discovered in their algorithms or training data and (b) providing clear mitigation strategies, similar to disclosure requirements in other regulated sectors such as medicines.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
A range of existing rules already apply to AI systems such as data protection, competition, equality legislation and sectoral regulation. The government is also committed to supporting regulators to promote the responsible use of AI in their sectors and mitigate AI-related challenges, such as identifying and addressing algorithmic bias.
To help tackle this issue, we ran the Fairness Innovation Challenge (FIC) with Innovate UK, the Equality and Human Rights Commission (EHRC) and the ICO. The FIC supported the development of novel solutions to address bias and discrimination in AI systems, and helped the EHRC and the ICO shape their own broader regulatory guidance.
This is complemented by the work of the AI Security Institute (AISI), which works in close collaboration with AI companies to assess model safeguards and suggest mitigations for risks pertaining to national security.
To date, AISI has tested over 30 models from leading AI companies, including OpenAI, Google DeepMind and Anthropic.
The government is committed to ensuring that the UK is prepared for the changes AI will bring and AISI’s research will continue to inform our approach.
Asked by: Victoria Collins (Liberal Democrat - Harpenden and Berkhamsted)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to ensure that safety-by-design principles are integrated into AI systems from inception rather than as retrospective additions, especially given the persistence of harmful online content, including deepfake CSAM, visible across the internet.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The government is committed to tackling the atrocious harm of child sexual exploitation and abuse (CSEA). Making, distributing or possessing child sexual abuse material (CSAM) is a serious criminal offence, and the Online Safety Act requires services to proactively identify and remove such content.
The Act requires in-scope services, including AI services, to take a safety-by-design approach to tackling these harms. Ofcom has set out safety measures, including requiring higher-risk services to use technology to detect known CSAM images and to scan for links to such content. There are also measures to tackle online grooming.
We are taking further action in the Crime and Policing Bill to criminalise AI models that have been optimised to create CSAM, and to create a new legal defence allowing designated experts (such as AI developers and third sector organisations) to stringently test whether AI systems can generate CSAM and to develop safeguards to prevent it.
The government remains committed to taking further steps, if required, to ensure that the UK is prepared for the changes that AI will bring.