Deepfakes continue to arouse widespread fears, despite the growing number of positive applications.
Deepfakes are your friend. Yes, deepfake technology has understandably become notorious in the wake of deepfake porn videos and the threat deepfakes seemingly pose to politics. However, the ability to generate realistic simulations using artificial intelligence will, on the whole, be only a positive for humanity.
Increasingly, new uses are being found for deepfakes. Good uses. Whether recreating long-dead artists in museums or editing video without the need for a reshoot, deepfake technology will allow us to experience things that no longer exist, or that have never existed. And aside from having numerous applications in entertainment and education, it’s being increasingly used in medicine and other areas.
In short, deepfakes work via deep generative modelling. Neural networks learn to create realistic-looking images and videos of real (or fictitious) people after processing a database of example images. Once trained on images of a real person, they can synthesise realistic videos of that person. The same technology can also be used to synthesise a person’s voice, which has led to fears that we’re not far from fake yet entirely believable videos of politicians and celebrities doing or saying outrageous things.
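To make the idea of deep generative modelling concrete, here is a deliberately tiny sketch of a generative adversarial network (GAN), the technique behind many deepfakes, reduced to its bare mechanics: a generator learns to produce samples that a discriminator can no longer tell apart from real data. Real deepfake systems use deep convolutional networks on images; this toy version uses one-parameter-per-weight linear models on 1-D numbers purely to show the adversarial training loop. All names and values here are illustrative, not from any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to mimic: samples from N(4, 1).
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g = w*z + b, fed with noise z ~ N(0, 1). Starts far from the target.
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c), outputs "probability x is real".
a, c = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(5000):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    x = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = w * z + b
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * g + c)
    a += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # --- generator step: push D(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    g = w * z + b
    d_fake = sigmoid(a * g + c)
    upstream = (1 - d_fake) * a      # gradient of log D(g) with respect to g
    w += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

# After training, generated samples should cluster near the real mean of 4.
fake = w * rng.normal(0.0, 1.0, 10_000) + b
print(f"generated mean ~ {fake.mean():.2f} (target 4.0)")
```

The same adversarial loop, scaled up to convolutional networks and millions of face images, is what lets a deepfake generator produce frames that fool both the discriminator and, eventually, human viewers.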
But this is the worst-case scenario. Much more realistically, deepfake technology will play an increasingly constructive role in recreating the past and in envisioning future possibilities. This is already being borne out by an expanding range of examples.
Most recently, Reuters collaborated with AI startup Synthesia to create the world’s first synthesised, presenter-led news reports, using the same basic deepfake technology to create new video reports out of pre-recorded clips of a news presenter. Most novel of all, the technology can automatically generate video reports personalised for each individual viewer.
Aside from TV, deepfakes also have considerable potential in the art world. Last year, researchers at Samsung’s AI lab in Moscow were able to transform Da Vinci’s famed Mona Lisa into video, using deep learning to show the subject of the painting moving her eyes, head and mouth.
Likewise, the Dalí Museum in St. Petersburg, Florida used deepfake technology last year as part of a new exhibition. Named Dalí Lives, it displayed a life-sized deepfake of the surrealist artist, created via 1,000 hours of machine learning on the artist’s old interviews. This recreation delivered a variety of quotes that Dalí had actually spoken or written over the course of his career.
Dalí’s words were spoken by an actor, but a Scottish company, CereProc, was able to train its own deepfake algorithms on audio recordings of former president John F. Kennedy. By training its deepfake technology in this way, the company was able to create ‘lost’ audio of the speech JFK was due to give in Dallas on November 22, 1963, the day he was assassinated.
The examples above show how deepfakes can serve to help bring history and art ‘alive’ for a wider audience. And if this helps to get thousands or millions of people interested in art and history, then the world can only benefit.
Other positive uses are emerging in education and entertainment. Last year, a UK-based health charity used deepfake technology to have David Beckham deliver an anti-malaria message in nine languages. Meanwhile, the likes of Nvidia are working on using AI-based deepfake technology to create graphics for video games.
And moving beyond image-based media, the machine learning technology underlying deepfakes will have beneficial impacts in other areas. In medicine, for instance, UCL AI professor Geraint Rees predicts that “the development of deep generative models raises new possibilities in healthcare.” One such possibility is the use of deep learning to synthesise realistic data, helping researchers to develop new ways of treating diseases without using actual patient data.
Work in this area has already been carried out by Nvidia, the Mayo Clinic and the MGH & BWH Center for Clinical Data Science, which in 2018 collaborated on using generative adversarial networks to create ‘fake’ brain MRI scans. The researchers found that algorithms trained on a combination of these synthetic images and just 10% real images became just as good at spotting tumours as algorithms trained only on real images.
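The study’s pipeline is far more elaborate, but the underlying idea, padding a small real dataset with synthetic samples so a classifier still has enough to learn from, can be sketched in a few lines. In this toy version (all names and numbers are illustrative), a fitted Gaussian stands in for the GAN, 2-D points stand in for MRI features, and a simple nearest-centroid classifier stands in for the tumour detector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for MRI features: two 2-D Gaussian classes ("healthy" vs "tumour").
def sample_class(mean, n):
    return rng.normal(mean, 1.0, size=(n, 2))

real_healthy, real_tumour = sample_class([0, 0], 500), sample_class([3, 3], 500)

# Pretend only 10% of the real data is available for training...
small_h, small_t = real_healthy[:50], real_tumour[:50]

# ...and a generative model (a fitted Gaussian here, standing in for a GAN)
# supplies the other 90% as synthetic samples.
def synthesise(real, n):
    mu, sigma = real.mean(axis=0), real.std(axis=0)
    return rng.normal(mu, sigma, size=(n, 2))

train_h = np.vstack([small_h, synthesise(small_h, 450)])
train_t = np.vstack([small_t, synthesise(small_t, 450)])

# Nearest-centroid classifier trained on the mixed real+synthetic set.
centroid_h, centroid_t = train_h.mean(axis=0), train_t.mean(axis=0)

def predict(x):  # 0 = healthy, 1 = tumour
    return (np.linalg.norm(x - centroid_t, axis=1)
            < np.linalg.norm(x - centroid_h, axis=1)).astype(int)

# Evaluate on held-out *real* data the model never saw.
test = np.vstack([real_healthy[50:], real_tumour[50:]])
labels = np.array([0] * 450 + [1] * 450)
acc = (predict(test) == labels).mean()
print(f"accuracy on real held-out data: {acc:.2f}")
```

The design point the study makes is the same one this sketch illustrates: if the generative model captures the real distribution well enough, synthetic samples can substitute for most of the scarce (and privacy-sensitive) real data.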
Such applications are likely only the beginning of deep learning’s role in medicine, and synthesised data is already being used in other sectors to protect privacy. For now, of course, deepfakes will continue to have a bad reputation, given that they’re best known for (potentially) threatening democracy. Still, if we can educate ourselves to trust only videos delivered by reputable sources (in the same way we trust only certain sources of text-based news), we’ll soon find that the good of deepfakes will more than outweigh the bad.