Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to address reports of search engines that use artificial intelligence being manipulated to direct consumers to fraudulent customer service phone numbers.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
Fraud is increasingly sophisticated. The Government is aware of reports that criminals are manipulating AI services to place scam customer service numbers at the top of search rankings.
Generative AI services that search live websites to deliver search results are regulated under the Online Safety Act. The Act also lists fraud as a priority offence, requiring companies to minimise its prevalence on their platforms and to remove such content swiftly when it appears. Ofcom has strong powers to ensure compliance.
The OSA is part of the solution, and the department continues to work with the Home Office as it prepares the new Fraud Strategy.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the copyright and transparency implications of major booksellers selling fiction generated by artificial intelligence systems.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
It is a matter for individual booksellers how they source books. However, the Government recognises the importance of clarity for rights holders and consumers in understanding the origin of AI-generated content.
We are currently preparing a report on copyright and artificial intelligence, for publication next year. This report will take into account a range of views and evidence.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the impact of employers adopting AI systems on the labour market, and what steps they are taking to ensure workers are equipped with the skills required by the labour market.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Get Britain Working White Paper sets out how we will address key labour market challenges and spread opportunity, fixing the foundations of our economy so that we can make the most of the opportunities AI presents. The Government is supporting workforce readiness for AI through a range of initiatives.
The new AI Skills Hub, developed by Innovate UK and PwC, provides streamlined access to digital training. This will support government priorities by tackling critical skills gaps and improving workforce readiness. We are also partnering with 11 major companies to train 7.5 million UK workers in essential AI skills by 2030, and expanding AI education in universities by launching Pioneer Fellowships for cross-disciplinary upskilling.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to encourage professional services firms to adopt artificial intelligence productivity tools.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government is committed to driving AI adoption across the economy, including professional services. Through the AI Opportunities Action Plan, we are tackling barriers such as lack of awareness, trust, and technical capability. This includes expanding the BridgeAI programme (which supports organisations in adopting AI with funding and hands-on support), announcing an AI champion for professional business services, and training 7.5 million workers across the economy in essential AI skills by 2030. We are also investing £11 million to grow the UK’s AI assurance market, ensuring firms can adopt tools confidently and responsibly. These measures will help businesses harness AI to boost productivity and maintain the UK’s global competitiveness.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they will take to support job creation and infrastructure development in the AI Growth Zone in South Wales.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government is establishing AI Growth Zones (AIGZs) to deliver the infrastructure needed for the UK to develop and deploy advanced AI at scale.
Following the announcement of the fourth AI Growth Zone in South Wales, we are working with national and regional government, businesses and local skills providers to address key barriers to investment in the area and accelerate benefits for communities across South Wales. Our AI Growth Zone policy package unlocks £5 million for each site to invest in local benefits and capitalise on the AI economy. This additional funding can support initiatives such as expanding data centre-focused skills pathways, creating more high-skilled, high-paying jobs and strengthening the local research environment.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the impact of artificial intelligence tools on employment levels, in the light of the finding in the Chartered Institute for Personnel and Development Labour Market Outlook report, published on 10 November, that 17 percent of UK employers expect to reduce their workforce due to AI tools in the coming year.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
We want to ensure people have access to good, meaningful work. AI has the potential to transform the labour market, and the Government is working to ensure the UK is well prepared, so that AI drives growth and opportunities for workers, businesses, and communities.
We are closely monitoring data on the impact of AI on the workforce, such as the CIPD report, and actively preparing for a range of scenarios. We are supporting workforce readiness for AI through multiple initiatives, including our commitment to give 7.5 million workers essential AI skills by 2030.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to develop accessibility standards for AI-enabled assistive communication technologies used by people with disabilities.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Public Sector Bodies Accessibility Regulations require most public sector organisations to ensure their services, websites, intranets, extranets, published documents, and apps are accessible to disabled people by meeting the requirements of the Web Content Accessibility Guidelines v2.2 to level AA and by publishing an accessibility statement in a prescribed format. This includes requirements to work with assistive technologies. The regulations apply regardless of whether the technology is AI-enabled. The Government Service Standard requires departments to make sure everyone can use the service; the standard does not apply to the wider public sector.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of reports that children are using artificial intelligence chatbots for mental health advice, in particular with regard to online safety and child protection.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
This Government is committed to improving NHS mental health services to ensure that children and young people receive the right support at the right time for their mental health.
The Online Safety Act requires all in-scope services, including AI chatbots, to proactively remove illegal suicide and self-harm content. Services likely to be accessed by children must take steps to prevent children from accessing suicide, self-harm, or eating disorder content.
DHSC’s 10 Year Plan has set out an ambitious reform agenda to transform mental health services to improve access and treatment and promote good mental health and wellbeing for the nation.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to work with technology companies and child safety agencies on regulatory frameworks for detecting child protection risks in artificial intelligence systems.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Online Safety Act's illegal content provisions have been in place since March, and additional protections for children since July of this year. Under the Act, AI services that allow users to share content with one another, or that search live websites to provide results, must protect all users from illegal content and children from harmful content.
The Government engages with a range of stakeholders on the impact of AI and will continue to act to address new and emerging AI harms. Through the Crime and Policing Bill, we are introducing an offence to criminalise AI models that have been optimised to create child abuse material, and we have tabled amendments to support the stringent testing of AI systems for child sexual abuse material risks.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what steps they are taking to monitor the compliance of artificial intelligence systems deployed in the UK with standards on privacy, transparency and non-discrimination.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The UK’s data protection framework adopts a technology-neutral, principles-based approach that applies to all organisations processing personal data, including those deploying artificial intelligence systems. The Information Commissioner’s Office (ICO), which is responsible for enforcing data protection law, has taken steps to provide guidance on how the law applies specifically to AI systems, including through updates following its recent generative AI consultation series. The ICO also has the power to investigate and impose penalties for non-compliance. Organisations deploying AI systems are required to ensure that any personal data is processed fairly, lawfully, transparently, and securely. Where legal or similarly significant decisions are made about individuals based solely on automated processing, organisations must put safeguards in place to allow individuals to make representations about the decision, to contest it, and to obtain human intervention.