
Select Committee
Adobe
DED0020 - Defending Democracy

Written Evidence Mar. 26 2024

Inquiry: Defending Democracy
Inquiry Status: Closed
Committee: National Security Strategy (Joint Committee)

Found: …in 2019 to develop technical standards for certifying the source and history of digital media


Non-Departmental Publication (Transparency)
Committee on Standards in Public Life

Feb. 23 2024

Source Page: CSPL 319th Meeting, Thursday 18 January 2024: Agenda and Minutes
Document: (webpage)

Found: recommendations of the report had been grouped into 8 themes: co-ordination and behaviour; political literacy


Lords Chamber
Media Bill
2nd reading - Wed 28 Feb 2024
Department for Digital, Culture, Media & Sport

Mentions:
1: Lord Holmes of Richmond (Con - Life peer) Bill I am interested that there is no mention whatever of media literacy, media competency and all those - Speech Link
2: Lord Storey (LD - Life peer) Government is likely to lead to an even greater decrease in trust. Turning now to media literacy, the - Speech Link


Written Question
Elections: Disinformation
Tuesday 30th January 2024

Asked by: Peter Kyle (Labour - Hove)

Question to the Home Office:

To ask the Secretary of State for the Home Department, what steps the Defending Democracy Taskforce is taking to reduce the potential threat of artificial intelligence generated deepfakes being used in elections.

Answered by Tom Tugendhat - Minister of State (Home Office) (Security)

The Government is committed to safeguarding the UK’s elections and already has established systems and processes in place to protect the democratic integrity of the UK.

DSIT is the lead department on artificial intelligence and is part of the Defending Democracy Taskforce which has a mandate to safeguard our democratic institutions and processes from the full range of threats, including digitally manipulated content. The Taskforce ensures we have a robust system in place to rapidly respond to any threats during election periods.

Furthermore, the Online Safety Act places new requirements on social media platforms to swiftly remove illegal misinformation and disinformation - including artificial intelligence-generated deepfakes - as soon as they become aware of it. The Act also updates Ofcom’s statutory media literacy duty to require it to take tangible steps to prioritise the public's awareness of and resilience to misinformation and disinformation online. This includes enabling users to establish the reliability, accuracy, and authenticity of content.

The new digital imprints regime, introduced by the Elections Act 2022, will also increase the transparency of digital political advertising (including artificial intelligence-generated material).

Finally, the threat to democracy from artificial intelligence was discussed at the AI Safety Summit in November 2023, reinforcing the Government’s commitment to international collaboration on this shared challenge.


Written Question
Internet: Disinformation
Tuesday 5th March 2024

Asked by: Dan Jarvis (Labour - Barnsley Central)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, whether Ofcom has had recent discussions with telecommunications companies on tackling online (a) misinformation and (b) disinformation.

Answered by Saqib Bhatti - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

Ofcom will have regular discussions with firms within its regulatory remit; details of those meetings are a matter for Ofcom as the independent regulator.

Under the Online Safety Act, Ofcom will have responsibility for regulating in-scope companies to ensure they are effectively taking action against illegal disinformation online and disinformation which intersects with the Act’s named categories of harmful content to children. These duties will come into force once Ofcom has completed its consultation and publication of the relevant Codes of Practice.

The Act also updates Ofcom’s statutory media literacy duty to require it to take tangible steps to prioritise the public's awareness of and resilience to misinformation and disinformation online. These duties are already in force.

It is a matter for Ofcom to decide what information to publish in the discharge of its regulatory responsibilities.


Written Question
Internet: Disinformation
Tuesday 5th March 2024

Asked by: Dan Jarvis (Labour - Barnsley Central)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, how many cases of online (a) misinformation and (b) disinformation Ofcom has dealt with since the implementation of the Online Safety Act 2023; and if he will ask Ofcom to publish those figures regularly.

Answered by Saqib Bhatti - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

Ofcom will have regular discussions with firms within its regulatory remit; details of those meetings are a matter for Ofcom as the independent regulator.

Under the Online Safety Act, Ofcom will have responsibility for regulating in-scope companies to ensure they are effectively taking action against illegal disinformation online and disinformation which intersects with the Act’s named categories of harmful content to children. These duties will come into force once Ofcom has completed its consultation and publication of the relevant Codes of Practice.

The Act also updates Ofcom’s statutory media literacy duty to require it to take tangible steps to prioritise the public's awareness of and resilience to misinformation and disinformation online. These duties are already in force.

It is a matter for Ofcom to decide what information to publish in the discharge of its regulatory responsibilities.


Written Question
Artificial Intelligence: Disinformation
Tuesday 6th February 2024

Asked by: Andrew Rosindell (Conservative - Romford)

Question to the Home Office:

To ask the Secretary of State for the Home Department, how many potential crimes involving AI deepfake programmes were reported in each of the last three years.

Answered by Tom Tugendhat - Minister of State (Home Office) (Security)

Generative artificial intelligence services have made it easier to produce convincing deepfake content and, whilst there are legitimate use cases, this is also impacting a range of crime types.

The Home Office is working closely with law enforcement, international partners, industry and across Government to address the risks associated with deepfakes. This includes reviewing the extent to which existing criminal law provides coverage of AI-enabled offending and harmful behaviour, including the production and distribution of deepfake material using generative AI. If the review suggests alterations to the criminal law are required to clarify its application to AI-generated synthetic and manipulated material, then amendments will be considered in the usual way.

The Online Safety Act places new requirements on social media platforms to swiftly remove illegal content - including artificial intelligence-generated deepfakes - as soon as they become aware of it. The Act also updates Ofcom’s statutory media literacy duty to require it to take tangible steps to prioritise the public's awareness of and resilience to misinformation and disinformation online. This includes enabling users to establish the reliability, accuracy, and authenticity of content.

We have no current plans to ban services which generate deepfakes; however, the Government has been clear that companies providing AI services should take steps to ensure safety and reduce the risks of misuse. This was discussed at the Government’s AI Safety Summit in November 2023, reinforcing our commitment to international collaboration on this shared challenge.

Crime is recorded on the basis of the underlying offence, not whether a deepfake was involved, and we are therefore unable to provide a figure for deepfake-enabled crimes.

We are unable to provide figures for departmental spending, as spending is captured according to crime type or to broader work on artificial intelligence, and is not broken down into activities specific to deepfakes.


Written Question
Artificial Intelligence: Disinformation
Tuesday 6th February 2024

Asked by: Andrew Rosindell (Conservative - Romford)

Question to the Home Office:

To ask the Secretary of State for the Home Department, whether his Department is taking steps to help tackle the rise in artificial intelligence generated deepfake crime.

Answered by Tom Tugendhat - Minister of State (Home Office) (Security)

Generative artificial intelligence services have made it easier to produce convincing deepfake content and, whilst there are legitimate use cases, this is also impacting a range of crime types.

The Home Office is working closely with law enforcement, international partners, industry and across Government to address the risks associated with deepfakes. This includes reviewing the extent to which existing criminal law provides coverage of AI-enabled offending and harmful behaviour, including the production and distribution of deepfake material using generative AI. If the review suggests alterations to the criminal law are required to clarify its application to AI-generated synthetic and manipulated material, then amendments will be considered in the usual way.

The Online Safety Act places new requirements on social media platforms to swiftly remove illegal content - including artificial intelligence-generated deepfakes - as soon as they become aware of it. The Act also updates Ofcom’s statutory media literacy duty to require it to take tangible steps to prioritise the public's awareness of and resilience to misinformation and disinformation online. This includes enabling users to establish the reliability, accuracy, and authenticity of content.

We have no current plans to ban services which generate deepfakes; however, the Government has been clear that companies providing AI services should take steps to ensure safety and reduce the risks of misuse. This was discussed at the Government’s AI Safety Summit in November 2023, reinforcing our commitment to international collaboration on this shared challenge.

Crime is recorded on the basis of the underlying offence, not whether a deepfake was involved, and we are therefore unable to provide a figure for deepfake-enabled crimes.

We are unable to provide figures for departmental spending, as spending is captured according to crime type or to broader work on artificial intelligence, and is not broken down into activities specific to deepfakes.


Written Question
Artificial Intelligence: Disinformation
Tuesday 6th February 2024

Asked by: Andrew Rosindell (Conservative - Romford)

Question to the Home Office:

To ask the Secretary of State for the Home Department, whether he has plans to outlaw the use of artificial intelligence deepfake programmes.

Answered by Tom Tugendhat - Minister of State (Home Office) (Security)

Generative artificial intelligence services have made it easier to produce convincing deepfake content and, whilst there are legitimate use cases, this is also impacting a range of crime types.

The Home Office is working closely with law enforcement, international partners, industry and across Government to address the risks associated with deepfakes. This includes reviewing the extent to which existing criminal law provides coverage of AI-enabled offending and harmful behaviour, including the production and distribution of deepfake material using generative AI. If the review suggests alterations to the criminal law are required to clarify its application to AI-generated synthetic and manipulated material, then amendments will be considered in the usual way.

The Online Safety Act places new requirements on social media platforms to swiftly remove illegal content - including artificial intelligence-generated deepfakes - as soon as they become aware of it. The Act also updates Ofcom’s statutory media literacy duty to require it to take tangible steps to prioritise the public's awareness of and resilience to misinformation and disinformation online. This includes enabling users to establish the reliability, accuracy, and authenticity of content.

We have no current plans to ban services which generate deepfakes; however, the Government has been clear that companies providing AI services should take steps to ensure safety and reduce the risks of misuse. This was discussed at the Government’s AI Safety Summit in November 2023, reinforcing our commitment to international collaboration on this shared challenge.

Crime is recorded on the basis of the underlying offence, not whether a deepfake was involved, and we are therefore unable to provide a figure for deepfake-enabled crimes.

We are unable to provide figures for departmental spending, as spending is captured according to crime type or to broader work on artificial intelligence, and is not broken down into activities specific to deepfakes.


Select Committee
Online Safety Act Network
DED0033 - Defending Democracy

Written Evidence Mar. 26 2024

Inquiry: Defending Democracy
Inquiry Status: Closed
Committee: National Security Strategy (Joint Committee)

Found: literacy policy.