Lord Tarassenko debates involving the Department for Business and Trade

Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL]

Lord Tarassenko (CB)

My Lords, it is a pleasure to follow the noble Baroness, Lady Lane-Fox. I agree with her points about implementation and upskilling the Civil Service. There is much that I want to say about automated decision-making, but I will focus on only one issue in the time available.

The draft Bill anticipates the spread of AI systems into ADM, with foundation models mentioned as components within the overall system. Large language models such as ChatGPT, which is probably the best-known example of a foundation model, typically operate non-deterministically. When generating the next word in a sequence, they sample from a probability distribution rather than always selecting the word with the highest probability. Therefore, ChatGPT will not always give the same response to the same query, as I am sure many noble Lords have discovered empirically.
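[The sampling behaviour described above can be illustrated with a minimal sketch. The vocabulary and probabilities below are invented for illustration, not drawn from any real model; the point is only the contrast between sampling from a distribution and always taking the most probable word.]

```python
import random

# Hypothetical next-word probabilities for some prompt
# (illustrative numbers only, not from any real model).
next_word_probs = {"Paris": 0.90, "a": 0.05, "located": 0.03, "not": 0.02}

def sample_next_word(probs, rng):
    """Sample a word from the distribution rather than taking the argmax."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random()
# Repeated sampling can yield different words for the same prompt,
# which is why the same query does not always produce the same answer.
samples = [sample_next_word(next_word_probs, rng) for _ in range(5)]

# Greedy decoding, by contrast, always returns the highest-probability word.
greedy = max(next_word_probs, key=next_word_probs.get)
```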

OpenAI introduced a setting in the API to its ChatGPT models last year to enable deterministic behaviour. However, there are other sources of non-determinism in the LLMs available from big-tech companies. Very slight changes in a query—for example, just in the punctuation or through the simple addition of the word “please” at the start—can have a major impact on the answer generated by the models.
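[The effect of such a seed setting can be sketched in miniature. This is not OpenAI's implementation, merely a toy "generator" showing the principle: fixing the random seed fixes the sampling choices, so the same seed and input reproduce the same output.]

```python
import random

# Toy vocabulary for an illustrative "generator" (not a real model).
vocab = ["the", "model", "answers", "differently", "each", "time"]

def generate(seed, length=4):
    # Seeding the random number generator fixes every sampling choice,
    # so the same seed always yields the same sequence of words.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(length)]

run_1 = generate(seed=42)
run_2 = generate(seed=42)  # identical to run_1
run_3 = generate(seed=7)   # will generally differ from run_1
```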

The models are also regularly updated, and older versions are no longer supported. If any ADM system used by a public authority relies on a deprecated version of a closed-source proprietary AI system from a company such as Google or OpenAI, it will no longer be able to operate reproducibly. For example, when using ChatGPT, OpenAI’s newer GPT-4 model will generate quite different outputs from GPT-3.5 for the same input data.

I have given these brief examples of non-deterministic and non-reproducible behaviour to underline a very important point: the UK public sector will not be able to control the implementation or evolution of the hyperscale foundation models trained at great cost by US big tech companies. The training and updating of these models will be determined solely by the commercial interests of those companies, not by the requirements of the UK public sector.

To have complete control over training data, learning algorithms, system behaviour and software updates, the UK Government need to fund the development of a sovereign AI capability for public sector applications. This could be a set of tailor-made, medium-scale AI models, each developed by the relevant government department, possibly in partnership with universities or UK-based companies willing to disclose full details of algorithms and computer code. Only then will the behaviour of AI algorithms for ADM be transparent, deterministic and reproducible—requirements that should be built into legislation.

I welcome this Bill, but the implications of introducing AI models into ADM within the public sector need to be fully thought through. If they are not, we risk losing the trust of our fellow citizens in a technology that has the potential to deliver considerable benefits by speeding up and improving decision-making processes.