Social Media: Artificial Intelligence

(asked on 20th March 2023)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, pursuant to the Answer of 24 January 2023 to Question 125342, whether the Online Safety Bill includes measures to help prevent the use of coding algorithms that may lead to increased racial stereotyping.


Answered by Paul Scully
This question was answered on 24th March 2023

Under the Online Safety Bill, all platforms will need to undertake risk assessments for illegal content, and services likely to be accessed by children will need to undertake a children’s risk assessment. This will ensure they understand the risks associated with their services, including in relation to their algorithms. They will then need to put in place proportionate systems and processes to mitigate these risks.

When deciding whether it is appropriate to recommend proactive technology, the regulator must have regard to the degree of accuracy, effectiveness and lack of bias achieved by the technology in question. This will help ensure that companies do not use algorithms that may lead to increased racial stereotyping when using proactive technologies to fulfil their safety duties.

More broadly, the Office for AI is working at pace to develop a White Paper setting out our position on governing and regulating AI, to ensure the UK seizes the opportunities presented by AI whilst addressing the potential risks the technology poses. This approach will establish a framework based on a set of cross-cutting principles to inform how regulators should tackle risks arising from issues such as racial bias in AI decision-making. We will work with regulators such as the EHRC to explore the practical implementation of our proposed AI regulatory framework alongside regulators’ existing duties.

The Centre for Data Ethics and Innovation’s (CDEI) work programme on Responsible Data Access includes a focus on helping organisations to obtain appropriate access to demographic data to assess potential risks of bias related to ethnicity and other demographic traits. This work follows the CDEI’s 2020 review into bias in algorithmic decision-making, which highlighted a range of legal, reputational, and practical barriers to accessing this data.