FAKEBOOK: HOW VIDEO MANIPULATION TECH COULD REVIVE FAKE NEWS
“YOU ARE FAKE NEWS!”
“Zero credibility, it’s all fake news.”
“The news is fake because so much of the news is fake.”
Sound familiar? These are the words of the infamous US President, Donald Trump. But while he might be laughable, fake news is no joke.
The 2016 presidential campaign was a scary game of true or false. Remember when Hillary Clinton was so ill she was replaced by a body double? Or when Trump labelled Republican voters the dumbest in the country? These stories went viral during the campaign, despite being completely untrue.
Named 2017’s word of the year, ‘fake news’ seized the public’s attention due to the outpouring of fabricated stories appearing on social media. Critics pointed the finger at social media as the main propagator of false information, and put Twitter and Facebook right in the firing line.
Fighting back, tech firms have signed a code of conduct vowing to do more to tackle fake news, due to concerns that it can influence elections. Facebook, Google and Twitter have all signed the document agreeing to hire fact checkers, but a new wave of fraudulent behaviour might make their job harder than first imagined.
Using technology to create fake news
A new breed of fake news has emerged from video and audio manipulation technology that can, quite literally, put words into people’s mouths.
The technology behind this type of fake news has been developed by companies like PinScreen, Lyrebird and Face2Face.
By synthesising people’s voices, and using AI and computer graphics, it is now possible to create realistic footage of public figures saying, well, anything. A world leader could declare war, or admit they have an alcohol problem!
This is the future of fake news. We’re often told not to believe everything we read, but what about things we see or hear?
Software developed at the University of Washington can manipulate footage of public figures and, using audio from interviews, create a whole new speech that never took place. The fake Obama created by the University of Washington made the rounds online last year and is proof of how convincing the footage can be.
It involved taking over 14 hours of Obama clips, mainly sourced from YouTube, to create a fake speech. If the software got into the wrong hands, anyone who has a large portfolio of speeches or videos available online could be at risk.
In another example, Lyrebird has created fake voice technology. From just 60 seconds of a human voice, the software can create a scarily accurate synthesised copy, one that could easily convince listeners they’re hearing the real person.
Voice-morphing, when combined with face manipulation, could create a dystopian future where we won’t be able to tell what is true or false online. Fake news can have real-life implications, from spreading malicious gossip about politicians to disrupting the stock market: when a post went viral in 2008 falsely claiming Steve Jobs had suffered a heart attack, Apple’s stock dropped 10 points.
Social media is at the root of these problems. With 82 per cent of young adults in the UK relying on social media for their daily news, and tweets going viral in minutes, it is the easiest platform on which to spread fake videos.
Using technology to combat fake news
People and news organisations have a responsibility to scrutinise content, but what if the technology evolves beyond human detection?
In the US, the Defense Advanced Research Projects Agency (DARPA) is attempting to solve the problem of fake videos on social media. It is creating its own software to assess the authenticity of video footage by detecting inconsistencies in lighting and hair movement.
DARPA is calling on social media platforms to screen every video through its program before it is uploaded; however, Facebook and Twitter have not said exactly what they are doing to stop fake videos spreading.
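To get a feel for what “detecting inconsistencies” can mean in practice, here is a deliberately simplified sketch (not DARPA’s actual software, whose methods are far more sophisticated): one crude forensic signal is whether a video’s lighting stays consistent from frame to frame. The function below takes a hypothetical series of per-frame average brightness values and flags frames where the lighting jumps abruptly, as a spliced-in fake segment might.

```python
def flag_lighting_jumps(frame_brightness, threshold=0.15):
    """Toy forensic heuristic: return indices of frames whose average
    brightness (0.0-1.0 scale) differs from the previous frame by more
    than `threshold` -- a possible sign of spliced or synthetic footage.
    """
    suspicious = []
    for i in range(1, len(frame_brightness)):
        if abs(frame_brightness[i] - frame_brightness[i - 1]) > threshold:
            suspicious.append(i)
    return suspicious


# Hypothetical brightness readings: a sudden shift at frame 3 and back
# at frame 5 would suggest a segment filmed under different lighting.
brightness = [0.52, 0.53, 0.51, 0.80, 0.79, 0.52]
print(flag_lighting_jumps(brightness))  # prints [3, 5]
```

Real detection systems analyse far richer cues (shadows, skin texture, blinking patterns, compression artefacts), but the principle is the same: look for physical inconsistencies that manipulation tools struggle to keep coherent.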
Other companies are also attempting to tackle fake videos, including start-up Truepic, which has received over $10 million in funding to begin researching AI in videos.
Speaking to TechCrunch, Truepic CEO Jeff McGregor said: “The internet has quickly become a dumpster fire of disinformation.
“Fraudsters have taken full advantage of unsuspecting consumers and social platforms facilitate the swift spread of false narratives, leaving over 3.2 billion people on the internet to make self-determinations over what’s trustworthy vs. fake online… we intend to fix that by bringing a layer of trust back to the internet.”
The future of fake news could lead to a digital revolution in content validation. As the battle of computers commences, let’s hope authentication comes out on top before social media becomes a minefield of ‘alternative facts’.