UK 'guinea pig' for election security before landmark votes

The rise of AI poses a new threat to election integrity. Photo: ATTILA KISBENEDEK / AFP
Source: AFP

The UK general election is being watched closely after stark warnings that rapid advancements in cyber-tech, particularly AI, and increasing friction between major nations threaten the integrity of 2024's landmark votes.

"These rogue and unregulated technological advances pose an enormous threat to us all. They can be weaponised to discriminate, disinform and divide," Amnesty International's head, Agnes Callamard, said in April.

The UK election on July 4 -- four months before the United States -- will be seen as the "guinea pig" for election security, said Bruce Snell, cyber-security strategist at US firm Qwiet AI, which uses AI to prevent cyber-attacks.

While AI has grabbed most of the headlines, more traditional cyber-attacks remain a major threat.

"It's misinformation, it's disruption of parties, it's leakage of data and attacking specific individuals," said Ram Elboim, head of cyber-security firm Sygnia and a former senior operative at Israel's 8200 cyber and intelligence unit.

State actors are expected to be the main threat, with the UK already issuing warnings about China and Russia.

"The main things are maybe to promote specific candidates or agendas," said Elboim.

"The second is creating some kind of internal instability or chaos, something that will impact the public feeling."

The UK has an advantage over the United States due to the short time period between announcing and holding the election, giving attackers little time to develop and execute plans, said Elboim.

It is also less vulnerable to attacks on election infrastructure as voting is not automated, he added.

Deepfakes

But hacking of institutions remains a threat, and the UK has already accused China of being behind an attack on the Electoral Commission.

Ram Elboim, CEO of US-based cyber-security company Sygnia, warned that traditional cyber-attacks remained a threat. Photo: Guy LAHAV / SYGNIA/AFP

"You don't have to disrupt the main voting system," explained Elboim. "For example, if you disrupt a party, their computers or a third party that affects that party, that's something that might have an impact."

Individuals are most at risk of being targeted, he added. Any embarrassing information could be used to blackmail candidates.

But it is more likely the attacker will simply leak information to shape public opinion or use the hacked account to impersonate the victim and spread misinformation.

Former Conservative party leader Iain Duncan Smith, a fierce Beijing critic, has already claimed that Chinese state actors have impersonated him online, sending fake emails to politicians around the world.

However, it is the increased scope for using AI to create and distribute misinformation that is the real unknown quantity in this year's elections, said Snell.

The spread of "deepfakes" -- fake videos, pictures or audio -- is of prime concern.

"The levels of potential for fakery are just tremendous. It's something that we definitely didn't have in the last election," said Snell, calling the UK a "guinea pig" for 2024's votes.

He highlighted software that can recreate someone's voice from a 30-second sample, and how that could be abused.

Labour's health spokesman Wes Streeting has said he was a victim of deepfake audio, in which he appeared to insult a colleague.

Bot farms

Snell advised authorities to focus on a "shortcut" solution of "getting awareness out there, having people understand that this is the issue".

Former Conservative leader Iain Duncan Smith, a vocal China critic, said he has been impersonated online. Photo: Daniel LEAL / AFP

Other software can be used to make fake pictures and videos, despite filters on many AI applications designed to prevent the depiction of real people.

"AI is, while very sophisticated, also extremely easy to fool" into creating images of real people, said Snell.

AI is also being used to create "bots", which automatically flood social media with comments to shape public opinion.

"The bots used to be really easy to spot. You'd see things like the same message being repeated and parroted by multiple accounts," said Snell.

"But with the sophistication of AI now... it's very easy to generate a bot farm that can have 1,000 bots and every one have a varying style of communication," he added.
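The older, copy-paste style of bot activity Snell describes could in principle be flagged with a simple exact-duplicate check across accounts. The sketch below illustrates the idea; the post format and account names are hypothetical, not drawn from any real platform's API:

```python
from collections import defaultdict

def find_parroted_messages(posts, min_accounts=3):
    """Flag messages posted verbatim by several distinct accounts --
    the crude bot-farm pattern that predates AI-varied phrasing."""
    accounts_by_message = defaultdict(set)
    for account, message in posts:
        # Normalise lightly so trivial case/whitespace changes don't hide repeats.
        accounts_by_message[message.strip().lower()].add(account)
    return {msg: accts for msg, accts in accounts_by_message.items()
            if len(accts) >= min_accounts}

# Hypothetical sample feed: three accounts parroting one slogan.
posts = [
    ("@acct1", "Vote for candidate X!"),
    ("@acct2", "vote for candidate X!"),
    ("@acct3", "Vote for candidate X! "),
    ("@acct4", "I disagree with the budget."),
]
flagged = find_parroted_messages(posts)
# flagged -> {"vote for candidate x!": {"@acct1", "@acct2", "@acct3"}}
```

As Snell notes, AI-generated phrasing variation defeats exact-match checks like this one, which is why bot detection increasingly has to rely on behavioural signals instead of message content.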

While software to check whether videos and pictures were generated using AI already exists and works to a "high level of competency", such tools are not yet used widely enough to curb the problem.

Snell believes that the AI industry and social media firms should therefore take responsibility for curbing misinformation "because we're in a brave new world where the lawmakers have no idea what's going on".

Source: AFP

Authors: AFP