University of Washington researchers have developed new algorithms that solve a thorny challenge in the field of computer vision: turning audio clips into realistic, lip-synced video of the person speaking those words.
As detailed in a paper to be presented Aug. 2, the team successfully generated realistic video of former president Barack Obama talking about terrorism, fatherhood, job creation and other topics, using audio clips of those speeches and existing weekly video addresses that were originally on a different topic.
“These types of results have never been shown before,” said Ira Kemelmacher-Shlizerman, an assistant professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering. “Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings, as well as futuristic ones such as being able to hold a conversation with a historical figure in virtual reality by creating visuals just from audio. This is the kind of breakthrough that will help enable those next steps.”
In a visual form of lip-syncing, the system converts audio files of an individual’s speech into realistic mouth shapes, which are then grafted onto and blended with the head of that person from another existing video.
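In outline, that pipeline could be sketched as below. This is a minimal illustration of the process as described, with every function a hypothetical placeholder rather than the researchers’ actual code:

```python
# Illustrative outline only; each function is a placeholder stand-in for a
# component described in the article, not the team's implementation.
import numpy as np

def audio_to_mouth_shape(audio_frame):
    # Stand-in for the trained model that maps a sound to a mouth shape.
    return np.tanh(audio_frame[:4])

def graft_and_blend(reference_frame, mouth_shape):
    # Stand-in for superimposing the synthesized mouth region onto the
    # head from an existing video of the same person.
    frame = reference_frame.copy()
    frame[-1, :4] = mouth_shape
    return frame

def synthesize(audio_frames, reference_frames):
    return [graft_and_blend(f, audio_to_mouth_shape(a))
            for a, f in zip(audio_frames, reference_frames)]

# Toy data: 10 frames of 13-dim audio features and 8x8 reference frames.
frames = synthesize(np.random.randn(10, 13), np.random.randn(10, 8, 8))
```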
The team chose Obama because the machine learning technique needs available video of the person to learn from, and there were hours of presidential videos in the public domain. “In the future, video chat tools like Skype or Messenger will enable anyone to collect videos that could be used to train computer models,” Kemelmacher-Shlizerman said.
Because streaming audio over the internet takes up far less bandwidth than video, the new system has the potential to end video chats that are constantly timing out from poor connections.
“When you watch Skype or Google Hangouts, often the connection is stuttery and low-resolution and really unpleasant, but often the audio is pretty good,” said co-author and Allen School professor Steve Seitz. “So if you could use the audio to produce much higher-quality video, that would be terrific.”
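As a back-of-the-envelope illustration of that gap (both bitrates below are assumed ballpark figures, not measurements from the study):

```python
# Rough comparison; both bitrates are assumed typical values, not
# numbers reported by the UW team.
AUDIO_KBPS = 32     # compressed speech from a typical voice codec
VIDEO_KBPS = 1500   # a modest video-call stream

print(f"Video needs roughly {VIDEO_KBPS / AUDIO_KBPS:.0f}x "
      f"the bandwidth of the audio alone.")
```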
By reversing the process, feeding video into the network instead of just audio, the team could also potentially develop algorithms to detect whether a video is real or manufactured.
The new machine learning tool makes significant progress in overcoming what’s known as the “uncanny valley” problem, which has dogged efforts to create realistic video from audio. When synthesized human likenesses appear to be almost real, but still manage to somehow miss the mark, people find them creepy or off-putting.
“People are particularly sensitive to any areas of your mouth that don’t look realistic,” said lead author Supasorn Suwajanakorn, a recent doctoral graduate in the Allen School. “If you don’t render teeth right or the chin moves at the wrong time, people can spot it right away and it’s going to look fake. So you have to render the mouth region perfectly to get beyond the uncanny valley.”
Previously, audio-to-video conversion processes have involved filming multiple people in a studio saying the same sentences over and over to try to capture how a particular sound correlates to different mouth shapes, which is expensive, tedious and time-consuming. By contrast, Suwajanakorn developed algorithms that can learn from videos that exist “in the wild” on the internet or elsewhere.
“There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources. And these deep learning algorithms are very data hungry, so it’s a good match to do it this way,” Suwajanakorn said.
Rather than synthesizing the final video directly from audio, the team tackled the problem in two steps. The first involved training a neural network to watch videos of an individual and translate different audio sounds into basic mouth shapes.
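A minimal sketch of what such a network might look like, assuming per-frame audio features as input and a compact mouth-shape code as output; the framework, layer sizes and feature dimensions here are illustrative assumptions, not the team’s exact architecture:

```python
# Hypothetical sketch in PyTorch; sizes and feature choices are
# assumptions for illustration, not the authors' architecture.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    """Map a sequence of per-frame audio features to mouth-shape codes."""
    def __init__(self, n_audio_feats=13, n_mouth_coeffs=20, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_mouth_coeffs)

    def forward(self, audio_seq):    # (batch, time, n_audio_feats)
        h, _ = self.rnn(audio_seq)
        return self.head(h)          # (batch, time, n_mouth_coeffs)

model = AudioToMouth()
dummy = torch.randn(1, 100, 13)      # 100 frames of audio features
mouth_codes = model(dummy)           # one mouth-shape code per frame
```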
By combining earlier research from the team with a new mouth synthesis technique, they were then able to realistically superimpose and blend those mouth shapes and textures onto an existing reference video of that person. Another key insight was to allow a small time shift so that the neural network can anticipate what the speaker is going to say next.
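One way to realize that time shift is to pair the audio frame at time t with the mouth shape from a few frames earlier, so that by the time the network commits to a mouth shape it has already “heard” a short stretch of the upcoming audio. A sketch, where the five-frame delay is an assumed value:

```python
import numpy as np

def delay_targets(audio_feats, mouth_shapes, delay=5):
    """Pair audio at frame t with the mouth shape from frame t - delay,
    giving the model a short look-ahead; the delay length is an
    illustrative assumption."""
    return audio_feats[delay:], mouth_shapes[:-delay]

audio = np.random.randn(200, 13)   # 200 frames of audio features
mouths = np.random.randn(200, 20)  # matching mouth-shape codes
x, y = delay_targets(audio, mouths)
assert len(x) == len(y) == 195
```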
The new lip-syncing process enabled the researchers to create realistic videos of Obama speaking in the White House, using words he spoke on a television talk show or during an interview decades ago.
Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice, speaking words he actually uttered, is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data: only an hour of video to learn from, for instance, instead of 14 hours.
“You can’t just take anyone’s voice and turn it into an Obama video,” Seitz said. “We very consciously decided against going down the path of putting other people’s words into someone’s mouth. We’re simply taking real words that someone spoke and turning them into realistic video of that individual.”
The research was funded by Samsung, Google, Facebook, Intel and the University of Washington Animation Research Labs.
For more information, contact the research team at audiolipsync@cs.washington.edu.