Baroness Shields
Lords Chamber
My Lords, I offer my warmest congratulations to the new ministerial team and my support to this new Government at a time of unprecedented challenges in foreign affairs and defence. I rise today to address an issue of profound importance that strikes at the very heart of our democratic values and national sovereignty. Disinformation and divisive narratives, particularly those propagated through AI-enabled platforms, are undermining the sanctity of our democracy and the integrity of our global information ecosystem.
Our digital public square, comprising the leading social networks and communications platforms, has become a modern equivalent of town halls and community gatherings. These are spaces where ideas are shared, debates are held and public opinion is shaped. However, the very mechanisms that drive these platforms are now being exploited to manipulate free will, spread misinformation and sow division among our citizens.
Over the past decade, the entrenchment of a surveillance advertising business model has built these platforms into trillion-dollar companies, at the expense of quality journalism. Advertising, once the lifeblood of news sites, now flows primarily to these companies and their platforms, pushing fact-checked and researched journalism behind paywalls. This dynamic has transformed truth into a luxury good, accessible only to those who can afford it, while misinformation and disinformation spread freely. A recent Harvard study confirms this trend, highlighting a shift in how young people consume news, with 25% relying on YouTube, 25% on Instagram, and 23% on TikTok as their primary news source.
Jack Dorsey, the founder and former CEO of Twitter, spoke compellingly at the Oslo Freedom Forum last month about the dangers of unchecked social media algorithms manipulating our free will and undermining human agency:
“We are being programmed based on what we say we’re interested in, and we’re told through these discovery mechanisms what is interesting—and as we engage and interact with this content, the algorithm continues to build more and more of this bias”,
thereby deciding for us what we see. He warned that, soon, AI tools will know us better than we know ourselves and, whether by design or by default, influence our thinking at a subconscious level. The risks are enormous.
Algorithmic manipulation is already dividing us. Provocative misinformation is exponentially amplified, while factually correct posts receive minimal exposure. Algorithms decide what we see and shape our perceptions of truth. They are designed to amplify extremes, provoke emotions and increase user engagement. Over time, these reinforcement mechanisms isolate and separate people into groups of “us versus them”, creating the conditions for political tensions to escalate into violence and civil unrest.
In the age of social media, we are conditioned to watch, like and share the very disinformation that undermines our democracy, making people vulnerable to manipulation and “influence operations” propagated by adversarial states. Recently, false narratives surfaced about the attempted assassination of former President Trump, suggesting the plot was either staged or orchestrated by government. These stories spread like wildfire, influencing millions and raising critical questions about the impact of algorithms and divisive rhetoric on public discourse.
We simply cannot accept a world where forces beyond our control or understanding are programming our thoughts and feelings. Privacy, free speech and the exercise of free will—fundamental values in a liberal democracy—must be protected by regulatory frameworks that ensure transparency and choice in algorithmic processes and preserve our individual autonomy.
The timing of our discussion today is critical. We find ourselves midway through a global election cycle that will see 50% of the world’s population go to the polls in 73 national elections. Much of the media attention has been on the technological capabilities of AI to generate sophisticated disinformation in the form of audio and video deepfakes. Little attention, though, has been paid to algorithmic amplification and viral distribution of this content, which erodes social cohesion and poses immediate threats to the integrity of our information ecosystem.
For more than a decade, Governments have attempted to work with big tech on voluntary compliance to detect and remove harmful content, but the incentives to co-operate remain misaligned. While we have responded with legislation like the UK’s Online Safety Act and the EU’s AI Act, Governments are reacting urgently to harms and crimes without addressing the underlying causes—the very business models these companies are built on. It is time for us to take a step back and re-evaluate our approach.
The UK, through the efforts of the last Government, demonstrated leadership in addressing AI safety concerns, exemplified by the global AI Safety Summit at Bletchley Park and the subsequent establishment of the AI Safety Institute. These initiatives align with warnings from experts about potential existential risks from advanced AI systems, especially in the areas of critical infrastructure and autonomous weapons. While discussions about future AI capabilities must continue, the most advanced AI systems in operation today—those that have driven our digital economy for over a decade—remain largely unregulated. The stakes could not be higher. We must act now to safeguard the future of democratic discourse and ensure that emerging technology serves humanity, not the other way around.
To achieve this, I ask the Government to elevate these issues to the forefront of our foreign policy agenda.
I remind the noble Baroness of the remarks of my noble friend the Chief Whip. Could she start winding up?
Okay—I am nearly finished.
The United Kingdom has the opportunity to lead by example, advocating for a global governance framework that upholds the integrity of our information ecosystem, protects our free will and human agency and preserves our democratic values.