
The next threat to our elections

Monday 19 March 2018

SOCIAL media expert Alex Connock, visiting professor at the University of Salford Business School and former Managing Director of Endemol Shine North, says we need to be aware of the threat that hyper-targeted video poses to our democracy.

In the next elections, voter manipulation may look nothing like it did in 2016. Which is not good.
If you need to know just how bad fake news got, try Robinson Meyer's piece in this month's The Atlantic, or the original Science magazine research article. Amongst other depressing findings, it says that the worst offenders in propagating fake news on Twitter between 2006 and 2016 weren't the bots at all – they were everyday people. That meant that even fake negative stories about Trump in 2016 went more viral than true, positive ones about him. (And he still won.)
Well, don't panic – all that can't happen again. With an FBI investigation and a UK parliamentary committee, everyone is now on red alert for viral fake news stories.
Just one thing – next time, the problem could be completely different.
Last time round, the clever parties employed cutting-edge digital marketing – and next time, the tech will be next-generation too. Like every other consumer product, retail politics has tracked the real-time, lightning-fast, $2.7 trillion global e-commerce business, and that is where the innovations are coming from. That means the opposite of mass fake-news distribution next time. It means hyper-targeted, one-on-one manipulation.
The retail political offers will be individualised to you. There will be partisan political video stories sent direct to your inbox, ultra-personalised by real-time artificial intelligence editing. You will see videos of politicians talking that are so well faked as to be indistinguishable from the real thing. And 'bad actors' (for which, read the bête noire du jour: Russia, the alt-left/right and so on) could test democracy like never before.
Artificial intelligence
Visit an e-commerce website and your cookies and social browsing history tell it that you spent time a month ago looking at a particular brand. So this time, it will simply play you the video that pitches a specific product. If you previously responded well to female interlocutors, or a certain storytelling style, or videos with the sound off, the system could now target those particular edits of films at you too.
In politics, it will be the same. As AI in e-commerce marketing grows, you will be shown a unique political video edit according to your sex, class, postcode, even your taste in online supermarkets. Videos won't be monitored by news-watchers, because they will constantly mutate. Factors like view time, response and engagement will show the video makers, or their AI, what to edit out and what to edit in. Gone will be the days of a single political broadcast. Think instead of 100 different variants, or even 100,000, each subtly changed for an individual viewer. The same party could even send videos simultaneously to thousands of people with diametrically opposite political offers.
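The loop described here – serve a variant, measure engagement, favour what works – is, at its simplest, a multi-armed bandit. Below is a minimal Python sketch under that assumption; the variant names and engagement counters are hypothetical, and this is an illustration of the general technique, not any real campaign system:

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually serve the variant with the best
    average engagement rate, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v]["engaged"] / max(stats[v]["shown"], 1))

def record(stats, variant, engaged):
    """Log one impression and whether the viewer engaged with it."""
    stats[variant]["shown"] += 1
    if engaged:
        stats[variant]["engaged"] += 1

# Hypothetical per-variant counters for three edits of the same video
stats = {f"edit_{i}": {"shown": 0, "engaged": 0} for i in range(3)}
```

Run at scale, a loop like this needs no human editor: the counters themselves steer which of the 100,000 edits each viewer sees next.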
So policing the new wave of retail political video won’t be a question of watching the platforms and top-trending videos.  It will be about seeing into direct video messages in individual inboxes, and unique videos playing on individual social feeds.  
Fake video
It gets worse.
Algorithms are being devised to quickly create a moving, lifelike avatar of anyone from a still image. At the University of Washington, researchers made ex-President Obama's mouth match a completely made-up script. They did it so well that they overcame the so-called 'uncanny valley' problem, where synthesised human likenesses appear almost real but remain creepy and off-putting. Now that they've cracked it, fake video could be created with an MP apparently admitting on tape to accepting a bribe, or Emmanuel Macron backing a market-sensitive Brexit deal, or Kim Jong Un saying he's launched a missile.
That means in the next social media election, we won't just be seeing bad actors and bots posting static memes or short fake news videos with subtitles, as we did in 2016. We could see Instagram, Facebook and YouTube feeds populated with highly realistic ersatz news videos of real people talking – which happen to be fake. And automated bots will be sending out individually targeted versions of those fake videos.
Real time unreal
Not only that. Because the technology will work in real time, it will allow bad actors to voice political video using other people's facial features live, too. It won't be enough to ask the larger platforms to take fake news down over a period of days. The content will already have gone out as a fake live stream.
A recent LA Times article looked at how hard it is to spot fake video. Watching for blood flow in the face can sometimes determine whether footage is real. Slight imperfections at the pixel level can also reveal whether a clip is genuine – though even that can be outwitted. The US Defense Advanced Research Projects Agency, DARPA, met in Menlo Park, California to look at so-called deepfakes and try to catch them out. The consensus was bleak. They just couldn't do it.
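Those pixel-level imperfections can be illustrated with a toy forensic check. The Python sketch below is a simplified illustration – not any detector DARPA or the platforms actually use – which compares the noise 'fingerprint' inside a suspect face region with the rest of the frame; a pasted-in or synthesised region often carries different noise statistics:

```python
import numpy as np

def noise_residual(gray):
    """High-pass residual: each pixel minus its 3x3 local mean,
    approximating the frame's sensor/compression noise."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    local_mean = sum(padded[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    return gray - local_mean

def region_mismatch(gray, region):
    """Relative gap between noise variance inside a suspect region
    (r0, r1, c0, c1) and the rest of the frame. Large values suggest
    the region was produced or processed differently."""
    r0, r1, c0, c1 = region
    res = noise_residual(gray.astype(float))
    mask = np.zeros(gray.shape, dtype=bool)
    mask[r0:r1, c0:c1] = True
    inside, outside = res[mask].var(), res[~mask].var()
    return abs(inside - outside) / max(outside, 1e-9)
```

Real forgeries are far subtler than this toy case, which is exactly why – as the DARPA meeting concluded – such checks can be outwitted.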
It’s a technology worth putting in the next Black Mirror – or a fake version of the show.
