The Rise of Deep Fakes: Video-Altering Technology
In the age of fake news, if seeing is believing, then one thing we can trust is video footage. Unlike Photoshopping an image or making an outrageous claim in print, video cannot be so easily adapted for spin or outright disinformation. Or can it? These days, you may not always be able to be so sure, and the reason is the rise of so-called deep fakes: video-altering technology that can manipulate footage in astonishing ways. For example, one leading Australian broadcaster used deep fakes to alter what was being said by a former Prime Minister, Malcolm Turnbull. In a warning to other broadcasters and news outlets, their experiment showed that we should no longer take at face value what somebody claims someone else has said. Crucially, that is true even when there appears to be documentary evidence to back up the claim, certainly if that evidence is a video which could have been given the deep fake treatment.
What Is a Deep Fake Video?
A deep fake video makes use of automated face mapping technology so that one face can be overlaid onto another in a way that appears very natural. In some cases, an actor with a similar mouth shape is filmed saying words that were not in the original footage, and this is then mapped onto the original almost seamlessly. Although small amounts of pixelation may appear, these are barely noticeable once the faked video is uploaded to a video streaming service. Nor is it just mouths and words that can be altered. Some deep fakes take an entire face and effectively paste it onto the body of another person; with this technique, it is possible to make it look as though an individual is doing anything you like. The underlying technology uses facial recognition software similar to that found in the filter apps that are already widely available.
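To make the "overlay" idea above concrete, here is a minimal sketch of the compositing step: blending a source face patch into a target frame using a soft alpha mask, which is what reduces visible seams at the edges. This is an illustrative simplification only; real deep fake systems locate faces with landmark detection and generate the replacement face with trained neural networks. The function name and parameters are hypothetical.

```python
import numpy as np

def overlay_face(target, source_face, top, left, alpha_mask):
    """Blend a source face patch into a target video frame.

    target:      H x W x 3 frame (e.g. uint8 RGB)
    source_face: h x w x 3 patch to paste in
    top, left:   where the patch's top-left corner lands in the frame
    alpha_mask:  h x w blend weights in [0, 1]; feathered (soft) edges
                 make the pasted face merge more naturally with the frame
    """
    h, w = alpha_mask.shape
    region = target[top:top + h, left:left + w].astype(float)
    # Weighted average of the new face and the original pixels.
    blended = (alpha_mask[..., None] * source_face
               + (1 - alpha_mask[..., None]) * region)
    out = target.copy()
    out[top:top + h, left:left + w] = blended.astype(target.dtype)
    return out
```

In a real pipeline this blend would be applied frame by frame, with the mask shaped to the detected face outline rather than a rectangle.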
Can Deep Fakes Be Used for Fun?
Of course, a deep fake video is not necessarily a problem, especially if the technology is used to raise a smile. The technique can be used for harmless fun, such as changing the face of your live dealer at the roulette table to look like a celebrity, for example. Equally, you could use deep fakes to alter the lip movements of famous actors or politicians for comic effect. Comedians have long used voice-over techniques to make it appear that famous people are saying things they wouldn't normally say, all to raise a few laughs.
Think of a comedy show which uses real footage and the voice-over work of an impressionist or two. The whole viewing experience would be that much better for everyone concerned if the lines being read out matched the lip movements of the people on screen. Satirists have long used news footage for comic effect, and deep fakes allow them to take their ideas to the next level. The same could be said of somebody who wants to create a meme to share online. So long as no one is actually claiming a politician said something they didn't, and no one is defamed, where is the harm? In comedy, as in other areas of life, it comes down to context. In essence, deep fakes can be very funny, but only when you know the technology has been deployed while you're watching.
What Are The Potential Downsides of Deep Fakes?
The flip side of using deep fakes for fun or for comedic purposes is that the technology is so convincing that some people can be fooled unless they are told it has been used. As noted, this can sometimes be gleaned from the context even if it is not stated directly, usually because of the very comic effect that has been created. That said, how can you tell if someone uses deep fakes to spread disinformation or to back up a false claim they have made elsewhere? The technology certainly raises questions for anyone who is already concerned about the veracity, or otherwise, of their news feeds.
Imagine a world leader making a speech in front of a press corps, with video footage being taken. With deep fake lip-synching technology and a decent approximation of his or her voice, it would be possible to make him or her say anything. Politically controversial claims could be made. Rude or inappropriate comments might appear to come from a politician's mouth without it being easy to tell whether the footage is real. And what if a politician really did say something he or she later regretted, and then falsely claimed the video was a fake? The technology certainly raises lots of ethical questions.
And it doesn't just affect the world's leaders. Consider how a rival business could 'put words in the mouth' of a competitor's CEO in order to gain an edge or to denigrate them. Might the same situation arise if a criminal wanted to use deep fakes to shift the blame onto someone else during an investigation? Even if rebuttals were issued after such phoney footage appeared, would we believe them if the original falsehood seemed plausible in the first place? The trouble is that many of these questions are, as of now, unanswered. One thing we can say for sure, however, is that deep fakes are here to stay, so we had better get used to them.