Public Bill Committees
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Kanishka Narayan)
Jen Ellis: Again, that is a hugely complex question to cover in a short amount of time. One of the challenges that we face in the UK is that ours is a 99% small and medium-sized business economy. It is hard to think about how to place more burdens on small and medium businesses, what they can reasonably get done and what resources are available. That said, that is the problem that we have to deal with; we have to figure out how to make progress.
There is also a challenge here, in that we tend to focus a lot on the behaviour of the victim. It is understandable why—that is the side that we can control—but we are missing the middle piece. There are the bad guys, whom we cannot control but whom we can try to prosecute and bring to justice; and there are the victims, whom we can control, and we focus a lot on that—the CSRB focuses on that side. Then there is the middle ground of enablers. They are not intending to be enablers, but they are the people who are creating the platforms, mediums and technology. I am not sure that we are where we could be in thinking about how to set a baseline for them. We have a lot of voluntary codes, which is fantastic—that is a really good starting point—but the question is what value the voluntary approach has and how much behavioural change it actually drives. What you see is that the organisations that are already doing well and taking security seriously are following the voluntary codes because they were already investing, but there is a really long tail of organisations that are not.
Any policy approach, legislation or otherwise, comes down to the fact that you can build the best thing in the world, but you need a plan for adoption or the engagement piece—what it looks like to go into communities and see how people are wrestling with this stuff and the challenges that are blocking adoption. You also need to think about how to address and remove those challenges, and, where necessary, how to ensure appropriate enforcement, accountability and transparency. That is critical, and I am not sure that we see a huge amount of that at the moment. That is an area where there is potential for growth.
With the CSRB, the piece around enforcement is going to be critical, and not just for the covered entities. We are also giving new authorities to the regulators, so what are we doing to say to them, “We expect you to use them, to be accountable for using them and to demonstrate that your sector is improving”? There need to be stronger conversations about what it looks like not to meet the requirements. We should be looking more broadly, beyond just telling small companies to do more. If we are going to tell small companies to do more, how do we make it something that they can prioritise, care about and take seriously, in the same way that health and safety is taken seriously?
David Cook: To achieve the outcome in question, which is about the practicalities of a supply chain where smaller entities are relying on it, I can see the benefit of bringing those small entities in scope, but there could be something rather more forthright in the legislation on how the supply chain is dealt with on a contractual basis. In reality, we see that when a smaller entity tries to contract with a much larger entity—an IT outsourced provider, for example—it may find pushback if the contractual terms that it asks for would help it but are not required under legislation.
Where an organisation can rely on the GDPR, which has very specific requirements as to what contracts should contain, or the Digital Operational Resilience Act, which is a European financial services law and is very prescriptive as to what a contract must contain, any kind of entity doing deals and entering into a contract cannot really push back, because the requirements are set in stone. The Bill does not have a similar requirement as to what a contract with providers might look like.
Pushing that requirement into the negotiation between, for example, a massive global IT outsourced provider and a much smaller entity means either that we will see piecemeal clauses that do not always achieve the outcomes you are after, or that we will not see those clauses in place at all because of the commercial reality. Having a similarly prescriptive set of requirements for what that contract would contain means that anybody negotiating could point to the law and say, “We have to have this in place, and there’s no wriggle room.” That would achieve the outcome you are after: those small entities would all have identical contracts, at least as a baseline.
Emily Darlington (Milton Keynes Central) (Lab)
David Cook: The original NIS regulations came out of a directive from 2016, so this is 10 years old now, and the world changes quickly, especially when it comes to technology. Not only is this supply chain vulnerability systemic, but it causes a significant risk to UK and global businesses. Ransomware groups, threat actors or cyber-criminals—however you want to badge that—are looking for a one-to-many model. Rather than going after each organisation piecemeal, if they can find a route through one organisation that leads to millions, they will always follow it. At the moment, they are out of scope.
The reality is that those organisations, which are global in nature, often do not pay due regard to UK law because they are acting all over the world and we are one of many jurisdictions. They are the threat vector that is allowing an attack into an organisation, but it then sits with the organisations that are attacked to deal with the fallout. Often, although they do not get away scot-free, they are outside legislative scrutiny and can carry on operating as they did before. That causes a vulnerability. The one-to-many attack route is a vulnerability, and at the moment the law is lacking in how it is equipped to deal with the fallout.
Jen Ellis: In terms of what the landscape looks like, our dialogue often has a huge focus on cyber-crime and we look a lot at data protection and that kind of thing. Last year, we saw the impact of disruptive attacks, but in the past few years we have also heard a lot more about state-sponsored attacks.
I do not know how familiar everyone in the room is with Volt Typhoon and Salt Typhoon; they were widespread nation-state attacks that were uncovered in the US. We are not immune to such attacks; we could just as easily fall victim to them. We should take the discovery of Volt Typhoon as a massive wake-up call to the fact that although we are aware of the challenge, we are not moving fast enough to address it. Volt Typhoon particularly targeted US critical infrastructure, with a view to being able to massively disrupt it at scale should a reason to do so arise. We cannot have that level of disruption across our society; the impacts would be catastrophic.
Part of what NIS is doing and what the CSRB is looking to do is to take NIS and update it to make sure that it is covering the relevant things, but I also hope that we will see a new level of urgency and an understanding that the risks are very prevalent and are coming from different sources with all sorts of different motivations. There is huge complexity, which David has spoken to, around the supply chain. We really need to see the critical infrastructure and the core service providers becoming hugely more vigilant and taking their role as providers of a critical service very seriously when it comes to security. They need to think about what they are doing to be part of the solution and to harden and protect the UK against outside interference.
David Cook: By way of example, NIS1 talks about reporting to the regulator if there is a significant impact. What we are seeing with some of the attacks that Jen has spoken about is pre-positioning, whereby a criminal or a threat actor sits on the network and the environment and waits for the day when they are going to push the big red button and cause an attack. That is outside NIS1: if that sort of issue were identified, it would not be reportable to the regulator. The regulator would therefore not have any visibility of it.
NIS2 and the Bill talk about something being identified that is caused by or is capable of causing severe operational disruption. That widens the ambit of visibility and allows the UK state, as well as regulators, to understand what is going on in the environment more broadly, because if there are trends—if a number of organisations report to a regulator that they have found that pre-positioning—they know that a malicious actor is planning something. The footprints are there.
Westminster Hall
Kanishka Narayan
My hon. Friend brings deep expertise from her past career. If she feels there are particular absences in the legislation on equalities, I would be happy to take a look, though that has not been pointed out to me, to date.
The Online Safety Act 2023 requires platforms to manage harmful and illegal content risks, and offers significant protection against harms online, including those driven by AI services. We are supporting regulators to ensure that those laws are respected and enforced. The AI action plan commits to boosting AI capabilities through funding, strategic steers and increased public accountability.
There is a great deal of interest in the Government’s proposals for new cross-cutting AI regulation, not least shown compellingly by my right hon. Friend the Member for Oxford East (Anneliese Dodds). The Government do not speculate on legislation, so I am not able to predict future parliamentary sessions, although we will keep Parliament updated on the timings of any consultation ahead of bringing forward any legislation.
Notwithstanding that, the Government are clearly not standing still on AI governance. The Technology Secretary confirmed in Parliament last week that the Government will look at what more can be done to manage the emergent risks of AI chatbots, raised by my hon. Friend the Member for York Outer (Mr Charters), my right hon. Friend the Member for Oxford East, my hon. Friend the Member for Milton Keynes Central and others.
Alongside those comments, the Technology Secretary urged Ofcom to use its existing powers to ensure that AI chatbots in scope of the Act are safe for children. Further to the clarifications I have provided previously across the House, if hon. Members have a particular view on where there are exceptions or gaps in the Online Safety Act on AI chatbots that correlate with risk, we would welcome any contribution through the usual correspondence channels.
Kanishka Narayan
I have about two minutes, so I will continue the conversation with my hon. Friend outside.
We will act to ensure that AI companies are able to make their own products safe. For example, the Government are tackling the disgusting harm of child sexual exploitation and abuse with a new offence to criminalise AI models that have been optimised for that purpose. The AI Security Institute, which I was delighted to hear praised across the House, works with AI labs to make their products safer and has tested over 30 models at the frontier of development. It is uniquely the best in the world at developing partnerships, understanding security risks, and innovating safeguards, too. Findings from AISI testing are used to strengthen model safeguards in partnership with AI companies, improving safety in areas such as cyber-tasks and biological weapon development.
The UK Government do not act alone on security. In response to the points made by the hon. Members for Ceredigion Preseli (Ben Lake), for Harpenden and Berkhamsted, and for Runnymede and Weybridge, it is clear that we are working closely with allies to raise security standards, share scientific insights and shape responsible norms for frontier AI. We are leading discussions on AI at the G7, the OECD and the UN. We are strengthening our bilateral relationships on AI for growth and security, including AI collaboration as part of recent agreements with the US, Germany and Japan.
I will take the points raised by the hon. Members for Dewsbury and Batley, for Winchester (Dr Chambers) and for Strangford, and by my hon. Friend the Member for York Outer (Mr Charters) on health advice, and how we can ensure that the quality of NHS advice is privileged in wider AI chatbot engagement, as well as the points made by my hon. Friend the Member for Congleton and my right hon. Friend the Member for Oxford East on British Sign Language standards in AI, which are important points that I will look further at.
To conclude, the UK is realising the opportunities for transformative AI while ensuring that growth does not come at the cost of security and safety. We do this through stimulating AI safety assurance markets, empowering our regulators and ensuring our laws are fit for purpose, driving change through AISI and diplomacy.