UK Space Economy

Lord Clement-Jones Excerpts
Wednesday 11th March 2026

Lords Chamber
Baroness Lloyd of Effra (Lab)

I share the enthusiasm of my noble friend, the committee and the report for the space economy. We responded in detail to the recommendations earlier this year. We are setting out our strategic priorities, which, as the committee and my noble friend highlighted, encompass many aspects of our lives, including defence, economic growth and support for our farming communities. We will continue to focus our spend on the priorities of economic growth and national security outcomes.

Lord Clement-Jones (LD)

My Lords, on the Government’s space plan, will they formally adopt a policy of space debris neutrality, requiring all satellites launched from the UK to have what is called a “designed to demise” commitment to prevent further orbital congestion? With an active debris removal procurement worth some £75 million, how are the Government ensuring that UK-based SMEs are not being edged out by larger international companies for these critical domestic contracts?

Baroness Lloyd of Effra (Lab)

The noble Lord raises the important issue of space debris, which creates risks to our critical national infrastructure. We are strengthening UK space surveillance and investing in debris mitigation technologies. We are seen as a leader in space sustainability, including with the international community and His Majesty the King. We are supporting important UK companies such as Astroscale to understand the risks and costs of active debris removal. In fact, there are further announcements today on this important issue of space debris removal.

EU Digital Services Act and Regulation

Lord Clement-Jones Excerpts
Wednesday 11th March 2026

Lords Chamber
Baroness Lloyd of Effra (Lab)

We have the Online Safety Act, which is enforced by Ofcom and other regulators and, as the noble Lord will know, we recently announced a consultation on areas where we may seek to expand the regime or take further measures to enhance children’s well-being.

Lord Clement-Jones (LD)

My Lords, does the Minister accept that, as part of this dialogue, close co-operation on robust competition enforcement is essential to resist growing US pressure to weaken digital rules? As the EU actively enforces its Digital Markets Act, will the Government commit to aligning in practice with strong EU enforcement standards rather than allowing US corporate lobbying to dilute the UK’s digital markets competition regime?

Baroness Lloyd of Effra (Lab)

The UK has taken decisive action to strengthen competition and fairness in digital markets. In January 2025, Parliament equipped the CMA with new powers to boost competition and innovation in digital markets. In May, the Government issued a clear steer to the CMA to prioritise this work and align action with international jurisdictions, including the EU. The UK and CMA engage regularly with EU counterparts as both regimes begin operation to help maintain close alignment on emerging issues.

Superintelligent AI

Lord Clement-Jones Excerpts
Thursday 29th January 2026

Lords Chamber
Lord Clement-Jones (LD)

My Lords, I declare an interest as a consultant on AI regulation and policy for DLA Piper. I too thank the noble Lord, Lord Hunt of Kings Heath, for provoking an extremely profound and thoughtful debate on an international moratorium on superintelligent AI development. I was very interested that he cited the Warnock approach as one to be emulated in this field. That was certainly one that our House of Lords Artificial Intelligence Committee recommended eight years ago, but sadly it has not been followed.

For nine years, I have co-chaired the All-Party Parliamentary Group on Artificial Intelligence. I remain optimistic about AI’s potential, but I am increasingly alarmed about our trajectory, particularly in the field of defence. Superintelligence—AI surpassing human intelligence across all domains—is the explicit goal of major AI companies. Many experts predict that we could reach this within five to 10 years. In September 2025, Anthropic detected the first large-scale cyber espionage campaign using agentic AI. Yoshua Bengio, one of the godfathers of AI development, warns that these systems show “signs of self-preservation”, choosing their own survival over human safety.

Currently, no method exists to contain or control smarter-than-human AI systems. This is the “control problem” that Professor Stuart Russell describes: how do we maintain power over entities more powerful than us? That is why I joined the Global Call for AI Red Lines, which was launched at the UN General Assembly by over 300 prominent figures, including Nobel laureates and former Heads of State. They call for international red lines to prevent unacceptable AI risks, including prohibiting superintelligence development, until there is broad scientific consensus on how it can be done safely and with strong public buy-in.

ControlAI’s UK campaign, described by the noble Lord, Lord Hunt, is backed by more than 100 cross-party parliamentarians in the UK. Its proposals include banning deliberate superintelligence development, prohibiting dangerous capabilities, requiring safety demonstrations before deployment, and establishing licensing for advanced AI.

The Montreal Protocol on Substances that Deplete the Ozone Layer offers a precedent. In 1987, every country signed it within two years—during the Cold War. When threats are universal, rapid international agreements are possible. Superintelligence presents such a threat. Yet the current situation is discouraging. The US has rejected moratoria. Sixty countries signed the Paris AI Summit declaration in February 2025, but the UK did not. Even Anthropic’s CEO, who has been widely quoted today, admits that we understand only 3% of how current systems work. Today, AI systems are grown through processes their creators cannot interpret.

The Government’s response has been inadequate. Ministers focus on regulating the use of AI tools rather than their development. But this approach fails fundamentally when facing superintelligence. Once a system surpasses human intelligence across all domains, we cannot simply regulate how it is used. We will have lost the ability to control it at all. You cannot regulate the use of something more intelligent than the regulator just sector by sector.

Our AI Security Institute, as the noble Lord, Lord Tarassenko, pointed out, has advisory powers only. We were promised binding regulation in July 2024, but we have seen neither consultation nor draft legislation. Growth and safety are not mutually exclusive. Without public confidence that systems are under human control, adoption will stall.

It is clear what the Government should do. The question is whether we will act with the seriousness this moment demands or whether competitive pressures will override the fundamental imperative of keeping humanity in control. I look forward to the Minister’s response.

TikTok: Bereaved British Parents

Lord Clement-Jones Excerpts
Tuesday 27th January 2026

Lords Chamber
Baroness Lloyd of Effra (Lab)

The Government are aware of calls to make the data preservation process faster. These are new powers and we are actively monitoring the effectiveness of the current process, working closely with Ofcom to do this. We are carefully considering any means that could allow relevant data to be preserved in a timely manner to ensure investigations are well informed and families get the answers they need.

Lord Clement-Jones (LD)

My Lords, the litigation alleges that TikTok’s algorithm deliberately promoted harmful content to children. That is exactly what we originally thought the Online Safety Act was going to help protect our children from, but that appears to be wrong. Will the Government, given their statement of strategic priorities, insert a statutory definition of safety by design and require Ofcom specifically to address addictive algorithms and compulsive design features?

Baroness Lloyd of Effra (Lab)

The noble Lord will be aware of the Statement that the Technology Secretary made last week initiating a short consultation on further measures that could be taken. It addresses some of the concerns underlying his question about the nature of social media use and the actions that could be taken in response to parental and other requests—for example, breaks to stop excessive doomscrolling, or further enforcement of the law. That consultation will take place swiftly, before the summer.

Superintelligent AI

Lord Clement-Jones Excerpts
Monday 26th January 2026

Lords Chamber
Baroness Lloyd of Effra (Lab)

My noble friend is right to mention the research of the AI Security Institute, which is advice the Government listen to and take very seriously. AI is a general-purpose technology with a wide range of applications, which is why the UK believes that the vast majority of AI should be regulated at the point of use. My noble friend is also right that collaboration with other countries is critical, and the UK’s approach is to engage with many other countries, and through the AI Security Institute with developers so that it has good insight into what is happening in development today.

Lord Clement-Jones (LD)

My Lords, I declare an interest as a consultant to DLA Piper on AI regulation and policy. In their manifesto, the Government promised

“binding regulation on … companies developing the most powerful AI models”,

yet, 18 months later, even in light of the harmful activities of stand-alone AI bots, we have seen neither the promised consultation nor any draft legislation. How can the Government credibly claim to be taking superintelligence seriously when they cannot get round even to publishing a consultation, let alone legislating?

Baroness Lloyd of Effra (Lab)

As I mentioned earlier, most AI systems are regulated by our existing expert regulators, and they are already acting. The ICO has released guidance on AI and data protection and the MHRA is taking action to allow a sandbox for AI as a medical device product. We are working with regulators to boost their capabilities as part of the AI opportunities action plan, and where we need to take action—for example, as we have under the Online Safety Act—we will do so. We do not speculate on legislation ahead of future parliamentary Sessions, but we will keep noble Lords updated as and when we bring forward a consultation ahead of any potential legislation.

Social Media: Non-consensual Sexual Deepfakes

Lord Clement-Jones Excerpts
Wednesday 14th January 2026

Lords Chamber
Viscount Camrose (Con)

My Lords, the technological capabilities and their misuse that have prompted this Statement are, needless to say, deeply disturbing and demand our careful attention. The use of AI to generate non-consensual sexual imagery of women and children is both grotesque in itself and corrosive of trust in technology more broadly.

We therefore welcome the Secretary of State’s confirmation that new offences criminalising the creation or solicitation of such material will be brought into force this week. We support the enforcement of these laws. We also welcome Ofcom’s decision to open a formal investigation into the use of Grok on X under the Online Safety Act, an investigation that must proceed swiftly to protect victims and hold platforms to account.

Hard though it is to predict the misuses of emerging technologies, we must collectively find better ways to be ready for them before they strike. I fear there is a pervasive and damaging sense of regulatory, legislative and political uncertainty around AI. As long as that remains the case, we risk remaining a victim of events beyond our control.

From the outset of this Parliament, and indeed in opposition, the Government have pledged to legislate on AI. Reviews and policy documents, including the Clifford AI Opportunities Action Plan, promised a framework to drive adoption and regulatory clarity. However, we still have no clear timeline, nor even a clear account of the Government’s policy on AI.

It is worth noting that the legislative tools the Government are now relying on to implement their proposed new offences, such as the creation and solicitation of non-consensual intimate images, are the product of amendments introduced by this House to the Data (Use and Access) Act. Ministers have repeatedly argued both that binding AI regulation must come, and that the existing multi-regulator framework is sufficient.

Evidence to the House of Commons Science, Innovation and Technology Committee late last year confirmed that the Secretary of State would not commit to a specific AI Bill, instead speaking of considering targeted interventions rather than an overarching legislative framework. This may indeed be the right approach, but its unclear presentation and communication drive uncertainty that undermines confidence for investors, businesses and regulators, but above all for citizens.

Progress on other AI-related policy commitments seems to have stalled too. I do not underestimate the difficulty of the problem, but work thus far on AI and copyright has been pretty disappointing. I am not seeking to go into that debate now, but only to make the point that it contributes to a widespread sense of uncertainty about tech in general and AI in particular.

Frankly, this uncertainty has been compounded by inconsistent political messaging. Over the weekend, reports emerged that the Government were considering banning X altogether before subsequently softening that position, creating wholly unnecessary confusion. At the same time, the Government have mischaracterised X’s decision to move its nudification tools behind a paywall as a means to boost profits, when the platform argues, reasonably persuasively, that this is a measure to ensure that those misusing the tools cannot do so anonymously.

Nor has there been much effective communication from the Government about their regulatory intentions for AI. This leaves the public and businesses unclear on how AI will be regulated and what standards companies are expected to meet. Political and legislative uncertainty in this case is having real consequences. It weakens our ability to deter misuse of AI technologies; it undermines public confidence, and it leaves regulators and enforcement agencies in a reactive posture rather than being empowered to act with a clear statutory direction.

We of course support efforts to criminalise harmful uses of AI. However, under the Government’s current Sentencing Bill, most individuals convicted of these new AI-related offences against women and girls will be liable for only suspended sentences, meaning that they could leave court free to continue using the technology that enabled their crime. This is concerning. It cannot be right that someone found guilty of producing non-consensual sexual imagery may walk free, unrestrained and with unimpeded access to the tools that facilitated their offending.

As I say, we support Ofcom’s work and the use of existing powers, but law without enforcement backed by a coherent, predictable regulatory regime will offer little real protection. Without proper sentencing, regulatory certainty and clear legislative direction for AI, these laws will not provide the protection that we need.

We urge the Government to publish a clear statement on their intentions on comprehensive AI regulation, perhaps building on the AI White Paper that we produced in government, to provide clarity for both tech companies and the public, and to underpin the safe adoption of AI across the economy and society. We must assume that new ways to abuse AI are being developed as we speak. Either we have principled, strategic approaches to deal with them, or we end up lurching from one crisis to the next.

Lord Clement-Jones (LD)

My Lords, we on the Liberal Democrat Benches welcome the Secretary of State’s Statement, as well as her commitment to bring the new offence of creating or requesting non-consensual intimate images into force and to make it a priority offence. However, why has it taken this specific crisis with Grok and X to spur such urgency? The Government have had the power for months to commence this offence, so why have they waited until women and children were victimised on an industrial scale?

My Commons colleagues have called for the National Crime Agency to launch an urgent criminal investigation into X for facilitating the creation and distribution of this vile and abusive deepfake imagery. The Secretary of State is right to call X’s decision to put the creation of these images behind a paywall insulting; indeed, it is the monetisation of abuse. We welcome Ofcom’s formal investigation into sexualised imagery generated by Grok and shared on X. However, will the Minister confirm that individuals creating and sharing this content will also face criminal investigation by the police? Does the Minister not find it strange that the Prime Minister needs to be reassured that X, which is used by many parliamentarians and government departments, will comply with UK law?

While we welcome the move to criminalise nudification apps in the Crime and Policing Bill, we are still waiting for the substantive AI Bill promised in the manifesto. The Grok incident proves that voluntary agreements are not enough. I had to take a slightly deep breath when I listened to what the noble Viscount, Lord Camrose, had to say. Who knew that the Conservative Party was in favour of AI regulation? Will the Government commit to a comprehensive, risk-based regulatory framework, with mandatory safety testing, for high-risk models before they are released to the public, of the kind that we have been calling for on these Benches for some time? We need risk-proportionate, mandatory standards, not voluntary commitments that can be abandoned overnight.

Will the Government mandate the adoption of hashtagging technology that would make the removal of non-consensual images possible, as proposed by the noble Baroness, Lady Owen of Alderley Edge, in Committee on the Crime and Policing Bill—I am pleased to see that the noble Lord, Lord Hanson, is in his place—and as advocated by StopNCII.org?

The Secretary of State mentioned her commitment to the safety of children, yet she has previously resisted our calls to raise the digital age of consent to 16, in line with European standards. If the Government truly want to stop companies profiteering from children’s attention and data, why will they not adopt this evidence-based intervention?

To be absolutely clear, the creation and distribution of non-consensual intimate images has nothing whatever to do with free speech. These are serious criminal offences. There is no free speech right to sexually abuse women and children, whether offline or online. Any attempt to frame this as an issue of freedom of expression is a cynical distortion designed to shield platforms from their legal responsibilities.

Does the Minister have full confidence that Ofcom has the resources and resolve to take on these global tech giants, especially now that it is beginning to ramp up the use of its investigation and enforcement powers? Will the Government ensure that Ofcom uses the full range of enforcement powers available to it? If X continues to refuse compliance, will Ofcom deploy the business disruption measures under Part 7, Chapter 6 of the Online Safety Act? Will it seek service restriction orders under Sections 144 and 145 to require payment service providers and advertisers to withdraw their services from the non-compliant platform? The public expect swift and decisive action, not a drawn-out investigation while the abuse continues. Ofcom must use every tool Parliament has given it.

Finally, if the Government believe that X is a platform facilitating illegal content at scale, why do they continue to prioritise it for official communications? Is it not time for the Government to lead by example and reduce their dependence on a platform that seems ideologically opposed to the values of decency and even perhaps the UK rule of law, especially now that we know that the Government have withdrawn their claim that 10.8 million families use X as their main news source?

AI technologies are developing at an exponential rate. Clarity on regulation is needed urgently by developers, adopters and, most importantly, the women and children who deserve protection. The tech sector can be a force for enormous good, but only when it operates within comprehensive, risk-proportionate regulatory frameworks that put safety first. We on these Benches will support robust action to ensure that that happens.

The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology (Baroness Lloyd of Effra) (Lab)

I thank both noble Lords for their contributions to the debate. We all agree that the circulation of these vile, non-consensual deepfakes has been shocking. Sexually manipulating images of women and children is despicable and abhorrent. The law is clear: sharing or threatening to share a deepfake intimate image without consent, including images of people in their underwear, is a criminal offence. To the noble Lord’s point, individuals who share non-consensual sexual deepfakes should expect to face the full extent of the law. In addition, under the Online Safety Act, services have duties to prevent and swiftly remove the content. If someone has had non-consensual intimate images of themselves created or shared, they should report it to the police, as these are serious criminal offences.

I turn to some of the points that have been raised so far. The Government have been very clear on their approach in terms of both the AI action plan and the legislation that we have brought forward. We have introduced a range of new AI-related measures in this Session to tackle illegal activity; we have introduced a new criminal offence to make it illegal to create or alter an AI model to create CSAM; we are banning nudification apps; and we are introducing a new legal defence to make it possible for selected experts to safely and securely test models for CSAM and non-consensual intimate images and extreme pornography vulnerabilities.

AI is a general-purpose technology with a wide range of applications, which is why we think that the vast majority of AI systems should be regulated at the point of use. In response to the AI action plan, the Government are committed to working with regulators to boost their capabilities. We will legislate where needed and where we see evidence of the gaps. Our track record so far has shown that that is what we do, but we will not speculate, as ever, on legislation ahead of future parliamentary Sessions.

I come to the question of Ofcom enforcement action. On Ofcom’s investigation process, the Secretary of State was clear that she expects an update from Ofcom on next steps as soon as possible and expects Ofcom to use the full legal powers that Parliament has given it to investigate and take the action that is needed. If companies are found to have broken the law, Parliament has given Ofcom significant enforcement measures. These include the power to issue fines of up to 10% of a company’s qualifying worldwide revenue and, in the most serious cases, Ofcom can apply for a court order to impose serious business disruption measures. These are all tools at Ofcom’s disposal as it takes forward its investigations. On the question of whether Ofcom has the resources to investigate online safety, as I think I have mentioned in the House before, Ofcom has been given additional resources year on year to undertake its duties in respect of enforcing the Online Safety Act: that is, I think, £92 million, which is an uplift on previous years.

I come to the question of the Government’s participation in news channels and on X. We will keep our participation under review. We do not believe that withdrawing would solve the problems that we have seen. People get their news from sources such as X and it is important that they hear from a Government committed to protecting women and girls. It is important that they hear what we are doing and hear when we call out vile actions such as these. We think it is extremely important to continue to take action and continue to back Ofcom in the actions that it is taking in respect of this investigation, and in fact all of its investigations under the Online Safety Act.

The noble Lord asked whether it should be mandatory for AI developers to test whether their models can produce illegal material. Enabling AI developers to test for vulnerabilities in their models is essential for improving safeguards and ensuring that they are robust and future-proofed. At present, such testing is voluntary, but we have been clear that no option is off the table when it comes to protecting UK users, and we will act where evidence suggests that further action can be effective or necessary. We are keeping many of the areas that have been raised today under review and we are seeking further evidence. We are looking at what is happening in other jurisdictions and at what is happening here and we will continue to take action.

I also reflect on the point that the noble Lord made that the issues around enforcing illegal activity have nothing to do with free speech. These are entirely separate issues and it is incredibly important to note that this is not about restricting free speech, but about upholding the law and ensuring that the standards we expect offline are upheld online. Many tech companies are acting responsibly and making strong endeavours to comply with the Online Safety Act, and we welcome their engagement on that. We need to make sure that our legislation and our enforcement are kept up to date with the great strides in technology that are happening. This means that, in some cases, we will be looking at the real-life impact and taking measures where new issues arise. That is the track record that we have shown and that is what we will continue to do.

Technology Adoption Review

Lord Clement-Jones Excerpts
Monday 15th December 2025

Lords Chamber
Baroness Lloyd of Effra (Lab)

The noble Lord is absolutely right that we need to take action on a number of fronts, including AI literacy and digital skills more generally. The Government are taking action on digital skills in a number of areas, including through what was the CyberFirst programme and is now the TechFirst programme, looking at both young people and students.

On AI skills, particularly for those in the workforce, the Prime Minister announced a plan to train 7.5 million workers with essential AI skills by 2030 through our industry partnership with key players. It is great to have those players collaborating with us on that.

Lord Clement-Jones (LD)

My Lords, the Technology Adoption Review is clear that the UK’s ability to turn research excellence into productivity gains depends on skills and access to world-class talent across our innovation system. In light of Sir Paul Nurse’s recent warnings that high visa fees and restrictive rules are actively deterring early career researchers and damaging the UK’s science base, will the Government commit to aligning research visa policy with their technology adoption ambitions, say, by emulating the Canada Global Impact+ Research Talent Initiative?

Baroness Lloyd of Effra (Lab)

The noble Lord is right that attracting high-calibre talent to this country is incredibly important. We have a number of ongoing initiatives to do that, including the Global Talent Taskforce, as well as through academia, as my noble friend the Minister with responsibility for science and technology talked about. The digital skills jobs plan will also set out how we can support that aim and get the balance right between growing homegrown talent and attracting those we need to from abroad, so that we have the best chances of growing our science base and the spin-outs.

Children: Age Verification and Virtual Private Networks

Lord Clement-Jones Excerpts
Thursday 4th December 2025

Lords Chamber
Lord Clement-Jones (LD)

My Lords, the Minister says that the Government are standing right behind Ofcom. Many of us very strongly support Ofcom’s actions in fining those such as the AVS Group for not observing proper age checks on their sites. But, as the noble Lord, Lord Carlile, indicates, there is no point in having fines unless we have proper enforcement. What resource are the Government satisfied Ofcom has to pursue enforcement?

Baroness Lloyd of Effra (Lab)

We have ensured that Ofcom is resourced to implement its online safety duties and have increased the amount available to it year on year; its budget is, I think, £92 million to support all its Online Safety Act responsibilities. We believe that it has the resources it needs to effectively implement and supervise the Online Safety Act.

Data Adequacy Status: EU Data Protection Standards

Lord Clement-Jones Excerpts
Thursday 4th December 2025

Lords Chamber
Baroness Lloyd of Effra (Lab)

I thank the noble Lord. He brings a great deal of experience over the years in many areas of data protection legislation, anti-money laundering and the security side. Since the UK and EU leaders’ summit on 19 May, we have been working with the EU to increase the safety and security of UK and EU citizens, to respond to shared threats, and to support police investigations, including through enhanced data exchange. We continue to work and meet closely with the EU on these matters.

Lord Clement-Jones (LD)

My Lords, the Government are trying to hit a moving target, as far as I can see. The EU is adopting a new digital omnibus, which will change EU GDPR. How confident are the Government about being able to get a decision from the EU in time?

Baroness Lloyd of Effra (Lab)

To take that question in two parts, we are confident about the EU’s scrutiny of our legislation. The Commission has started its review and published the report that I mentioned in July. The European Data Protection Board published a non-legally binding opinion on its draft decision on 20 October. We are confident that a member state vote will take place ahead of the 27 December deadline. The EU’s proposals to change its data protection framework have only recently been published. We will have a look at the details of those changes as and when they become clear and are confirmed.

Artificial Intelligence Legislation

Lord Clement-Jones Excerpts
Monday 17th November 2025

Lords Chamber
Baroness Lloyd of Effra (Lab)

I remind the House that AI is already regulated in the UK, on a context-specific approach. Our regulators can take account of developments in AI, which are indeed rapid, and ensure that their responses are tailored. In addition, as noble Lords know, various regulators are undertaking regulatory sandboxes, and there is the new proposal for the AI growth lab, which will look across all sectors and allow regulators to collaborate on this rapidly changing technology.

Lord Clement-Jones (LD)

My Lords, I declare an interest as chair of the Authors’ Licensing and Collecting Society and as a consultant to DLA Piper on AI policy. The first meeting of the rather grandly named Lords’ AI and copyright parliamentary engagement group takes place tomorrow. Would it not be extraordinary if the Government did not bring forward a Bill in the face of that engagement group’s conclusions and those of the industry working groups? Would any of those discussions not be rendered meaningless without a Bill next year? If a Bill does not come forward, would that not demonstrate the influence of big tech and the major technology companies on the Government?

Baroness Lloyd of Effra (Lab)

The issues to which the noble Lord refers have, of course, been extensively debated here. One outcome of conversations during the passing of the data Act was a commitment to have these discussions. I also think it would be premature to decide the nature or timing of legislation until those discussions are completed. Like the noble Lord, I highlight the importance of the parliamentary consultations, the first of which with Peers is indeed happening tomorrow, with the two Secretaries of State.