All 2 Debates between Dean Russell and Richard Thomson

Tue 13th Jun 2023
Digital Markets, Competition and Consumers Bill (First sitting)
Public Bill Committees

Committee stage: 1st sitting


Debate between Dean Russell and Richard Thomson
Richard Thomson

How could we legislate to create the framework by which the problem of fake reviews could best be addressed?

Rocio Concha: I think it needs to be in the list on schedule 18, and there is a very simple way to draft that amendment. We are going to suggest an amendment to help you with that, so I do not think that it is a major difficulty to include it on the face of the Bill.

Dean Russell

Q You are both at the coalface for consumers in terms of the challenges around all the issues addressed by the Bill. Can you briefly share some real-life examples of why the Bill is so important and what difference it will make to consumers?

Rocio Concha: I can give you some examples from the past so that you can see what consumers face. I already talked about the secondary ticketing problem, but I will give you another example. During covid, there were a lot of issues with people getting the refunds that they were entitled to by law; many people could not get them. That was on the consumer side. I will give you another example on the digital side.

At the moment, as you have heard from the CMA, digital advertising is basically controlled by two companies, Google and Facebook. Google has doubled its revenue from digital advertising since 2011, and Facebook used to make less than £5 per user—more recently, it has been around £50 per user. Google charges around 30% more for paid-for advertising than other search engines. All of that cost translates into the price of the products that we buy. We expect that once this pro-innovation, pro-competitive regulatory framework is put in place, we will see those savings translate into the prices that consumers pay.

We will also see it translate into more choice, in particular on data. At the moment, it is very difficult for consumers to have a choice over how much of their data is used for targeted advertising. You will have seen examples of that. When we talk to consumers, in particular about the issues surrounding data, they feel disempowered. When we talk to consumers about the problems that they face in some of the markets where there are high levels of detriment, they also feel disempowered.

Matthew Upton: To be clear, there is a lot of good in the Bill. I echo Rocio’s first comments that there are a lot of positives. It has been a long time coming, and it is a testament to the civil servants in the Department who have stuck with it. The main lens through which we see the impact of the potential changes in the Bill is the cost of living. It is not exactly headline news that people are struggling with their bills. One of the main measures that we look at is whether one of our clients is in a negative budget—that is, whether their income covers their essential outgoings. About 52% of our debt advice clients can no longer meet their essential—not merely desirable—outgoings from their income.

There are two areas where the Bill can make a real difference. One of the frustrations is that a debt adviser will go through someone’s income and where they spend their money in detail, helping them to balance their bills, and so on. You see the impact of other Government interventions, such as energy price support putting money in their pockets and the uprating of benefits. You are combing through their expenditure and you find something like a subscription trial taking £10 a month—a huge amount for a lot of our clients—out of their account unnecessarily. They did not even know that it was there. Often, it is people who are not online, are not savvy and are not combing through their bills every month, because they have a lot on. That is hugely frustrating, and measures like those in the Bill, especially if strengthened, could tackle it.

You will see similar things where people are just about balancing their monthly income with their expenditure and they get hit by some big scam bill or are let down by a company. Such companies are too often not held to account in the right way. It is a bit of a tangential example in some ways, but the hope is that the CMA’s increased ability to act and, in effect, to disincentivise poor behaviour towards consumers will lessen such instances as well.

Artificial Intelligence and the Labour Market

Debate between Dean Russell and Richard Thomson
Wednesday 26th April 2023


Westminster Hall


Richard Thomson

I thank the hon. Member for that intervention. He has perhaps read ahead to the conclusion of my speech, but it is an interesting dichotomy. Obviously, I know my own biography best, but there are people out there, not in the AI world—Wikipedia editors, for example—who think that they know my biography better than I do in some respects.

However, to give an example, the biography generated by AI said that I had been a director at the Scottish Environmental Protection Agency and, prior to that, a senior manager at the National Trust for Scotland. I had also apparently served in the Royal Air Force. None of that is true, but on one level it does make me want to meet this other Richard Thomson who exists out there. He has clearly had a far more interesting life than I have had to date.

Although that level of misinformation is relatively benign, it does show the dangers that can be presented by the manipulation of the information space, and I think that the increasing use and application of AI raises some significant and challenging ethical questions.

Any computing system is based on the premise of input, process and output. Therefore, great confidence is needed when it comes to the quality of information that goes in—on which the outputs are based—as well as the algorithms used to extrapolate from that information to create the output, the purpose for which the output is then used, the impact it goes on to have, and, indeed, the level of human oversight at the end.

In March, Goldman Sachs published a report indicating that AI could replace up to 300 million full-time equivalent jobs and a quarter of all work tasks in the US and Europe. It found that some 46% of administrative tasks, and even 44% of tasks in the legal professions, could be automated. GPT-4 recently managed to pass the US Bar exam, which is perhaps less a sign of machine intelligence than an indication that the exam is not a fantastic test of such capabilities—although I am sure it is a fantastic test of lawyers in the States.

Our fear of disruptive technologies is age-old. Although it is true to say that such disruption has generally created new jobs and allowed new technologies to take on the more laborious and repetitive tasks, it is still extremely disruptive. Some 60% of workers are currently in occupations that did not exist in 1940, but there is a real danger, as there has been with other technologies, that AI will depress wages and displace people faster than new jobs can be created. That ought to be of real concern to us.

In terms of ethical considerations, there are large questions to be asked about the provenance of datasets and the output to which they can lead. As The Guardian reported recently:

“The…datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks”

as well as all sorts of content created by others who receive no reward for its use; the entire proceedings of 16 years of the European Parliament; or even the entirety of the proceedings that have ever taken place, and been recorded and digitised, in this place. The datasets can be drawn from a range of sources, and they do not necessarily lead to balanced outputs.

ChatGPT has been banned from operating in Italy after the data protection regulator there expressed concerns that there was no legal basis to justify the collection and mass storage of the personal data needed to train GPT AI. Earlier this month, the Canadian privacy commissioner followed suit with an investigation into OpenAI, in response to a complaint alleging that the collection, use and disclosure of personal information was happening without consent.

This technology brings huge ethical issues not just in the workplace but right across society; particular questions, though, need to be asked when it comes to the workplace. For example, does it entrench existing inequalities? Does it create new inequalities? Does it treat people fairly? Does it respect the individual and their privacy? Is it used in a way that makes people more productive by helping them to be better at their jobs and to work smarter, rather than simply forcing them—notionally, at least—to work harder? How can we be assured that, at the end of it, a sentient, qualified, empowered person has proper oversight of the use to which the AI processes are being put? Finally, how can it be regulated as it needs to be—beneficially, and in the interests of all?

The hon. Member for Birkenhead spoke about and distributed the TUC document “Dignity at work and the AI revolution”, which, from the short amount of time I have had to scrutinise it, looks like an excellent publication. There is certainly nothing in its recommendations that anyone should not be able to endorse when the time comes.

I conclude on a general point: as processes get smarter, we collectively need to make sure that, as a species, we do not consequentially get dumber. Advances in artificial intelligence and information processing do not take away the need for people to be able to process, understand, analyse and critically evaluate information for themselves.

Dean Russell

This is one point—and a concern of mine—that I did not explore in my speech because I was conscious of its length. As has been pointed out, a speech has been given previously that was written by artificial intelligence, as has a question in Parliament. We politicians rely on academic research and on the Library. We also google and meet people to inform our discussions and debates. I will keep going on about my Turing clause—which connects to the hon. Gentleman’s point—because I am concerned that if we do not have something like that to highlight a deception, there is a risk that politicians will go into debates or votes that affect the government of this country having been deceived—potentially on purpose, by bad actors. That is a real risk, which is why there needs to be transparency. We need something crystal clear that says, “This is deceptive content” or “This has been produced or informed by AI”, to ensure the right and true decisions are being made based on actual fact. That would cover all the issues that have been raised today. Does the hon. Member share that view?

Richard Thomson

Yes, I agree that there is a very real danger of this technology being used for the purposes of misinformation and disinformation. Our democracy is already exceptionally vulnerable to that. Just as the hon. Member highlights the danger of individual legislators being targeted and manipulated—they need to have their guard up firmly against that—there is also the danger of people trying to manipulate behaviour by manipulating wider political discourse with information that is untrue or misleading. We need to do a much better job of ensuring we are equipping everybody in society with critical thinking skills and the ability to analyse information objectively and rationally.

Ultimately, whatever benefits AI can bring, it is our quality of life and the quality of our collective human capital that count. AI can only ever be, and should only ever be, a tool and a servant to that end.