Deepfakes: Face Manipulation

Esin Gedik
Jan 7, 2021

You have probably seen manipulated videos of Barack Obama, Mark Zuckerberg, or Donald Trump saying things they never actually said. If so, you have already witnessed a deepfake.

Two examples of deepfakes: “face swap” and “lip-sync.” [1]

Deepfake technology uses AI to create realistic-looking photos and videos of people saying and doing things they never actually said or did. It can even generate faces of people who do not exist at all.

None of these people exist. These images were generated using deepfake technology. [2]

Audio can be deepfaked too, producing convincing voice clones of public figures. For example, the head of a UK subsidiary of a German energy firm transferred around £200,000 to a Hungarian bank account after receiving a call from a fraudster imitating the German CEO’s voice with deepfake audio. [3]

Why do people make deepfakes?

They are mainly used to smear women, celebrities, or politicians, or to commit fraud.

So how are they made?

The core technology behind deepfakes is the generative adversarial network (GAN). Before GANs were developed, neural networks were good at classifying existing content, such as understanding speech or recognizing faces, but not at creating new content. GANs gave neural networks the power not just to perceive, but to create.

Before the model is trained, photos of human faces are collected to build a dataset. A GAN uses two separate neural networks: a generator and a discriminator. The generator creates new images that mathematically resemble the existing images, pixel by pixel. Meanwhile, images are fed into the discriminator without being told whether they come from the original dataset or from the generator’s output, and it must decide which are real. The two networks are run against each other over and over: the generator tries to fool the discriminator with its fabricated images, while the discriminator tries to tell the generator’s creations apart from genuine ones.
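To make that adversarial loop concrete, here is a minimal training sketch written in PyTorch (my own illustrative choice; the article does not name a framework). The layer sizes, the flattened 64×64 image shape, and the `train_step` helper are all assumptions for illustration; a real deepfake pipeline would use much larger convolutional networks.

```python
# Minimal GAN sketch (assumed PyTorch setup; sizes are illustrative only).
import torch
import torch.nn as nn

latent_dim = 100          # size of the random noise vector fed to the generator
image_dim = 64 * 64 * 3   # flattened 64x64 RGB face image (assumed size)

# Generator: turns random noise into an image-shaped tensor.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is a real photo.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial round; real_images is a (batch, image_dim) tensor of faces."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real faces from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into labeling fakes as real.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to `train_step` pits the two networks against each other once: the discriminator gets slightly better at spotting fakes, and the generator gets slightly better at producing them.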

Resources

[1] Berkeley
[2] Forbes
[3] The Guardian
