From face-swapping to 3D filters, digital effects are more accessible than ever. But what are these deepfakes taking the Internet by storm? How is the technology evolving, and what does Artificial Intelligence (AI) have to do with it?
If you are active on social networks, you may have seen many apps and filters used to swap faces in images and videos. That technology has been around for many years but has rarely produced such credible results.
Today there are several ways to swap faces very realistically. Not all of them use AI, but some do: the deepfake is one of them.
What are deepfakes?
Deepfake generally refers to videos in which the face and/or voice of a person, usually a public figure, has been manipulated using artificial intelligence software in a way that makes the altered video look authentic.
Deepfakes are considered a source of concern because they are often used to intentionally mislead, such as making it look like a politician said something they didn't, or making it look like a celebrity was in a pornographic video they weren't in.
Deepfakes use machine learning, a branch of AI, with technology developed by academics, hobbyists, and industry. Initial development occurred in the late 1990s, with significant advances in the late 2010s.
This technology is based on sophisticated algorithms in which, to put it simply, one AI generates images of people while a second AI judges whether those images are real or fake. By competing against each other, both networks get steadily better at their jobs.
Autoencoders and generative adversarial networks (GANs) are some of the technical names involved in these algorithms. Along with images, deepfakes can produce realistic-sounding audio.
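To make that two-network game concrete, here is a minimal sketch, assuming PyTorch is available. The "images" are small random vectors standing in for real faces, and every size, name, and hyperparameter is illustrative rather than a real deepfake pipeline.

```python
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32

# Generator: turns random noise into a fake "image" (here just a vector).
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)
# Discriminator: outputs a real-vs-fake score (a logit) for each image.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(BATCH, IMG_DIM) * 2 - 1  # stand-in for real face images
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator turn: learn to label real images 1 and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator turn: learn to make the discriminator say "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each side's improvement forces the other to improve, which is why GAN-generated faces have become so convincing.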
As a term, deepfake combines fake (because the media are not genuine) and deep learning, a machine-learning technique based on artificial neural networks. These networks, again simplifying, are like computer programs loosely modeled on the brain.
Deepfakes have also targeted public figures, such as Facebook CEO Mark Zuckerberg and U.S. House Speaker Nancy Pelosi, fueling growing concern about their potential for misinformation and fraud.
In January 2020, Facebook announced a ban on deepfakes (except those that are parody or satire), though the company was criticized for not going far enough against so-called cheap fakes: content manipulated with conventional, low-tech editing rather than AI, generally with malicious intent.
Deepfakes should also be distinguished from fake news, which does not rely on manipulated images or audio.
How to Detect Deepfakes?
Deepfakes are difficult for the untrained eye to detect because they can be quite realistic. Whether they are used as personal weapons of revenge, to manipulate financial markets, or to destabilize international relations, videos showing people doing and saying things they never did or said are a fundamental threat to the idea that "seeing is believing."
Most deepfakes are made by showing a computer algorithm many images of a person and having it use what it has seen to generate new facial images. The person's voice can be synthesized at the same time, so the result looks and sounds as if they said something new.
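As an illustration, here is a minimal sketch, assuming PyTorch, of one widely described face-swapping setup: a single shared encoder with one decoder per identity. All layer sizes and names are illustrative, not a production architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, CODE_DIM = 64 * 64, 256  # flattened 64x64 grayscale "faces"

# One shared encoder learns pose and expression for both people...
encoder = nn.Sequential(nn.Linear(IMG_DIM, 512), nn.ReLU(),
                        nn.Linear(512, CODE_DIM))
# ...while each decoder learns to render one specific identity.
decoder_a = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                          nn.Linear(512, IMG_DIM))
decoder_b = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(),
                          nn.Linear(512, IMG_DIM))

params = [*encoder.parameters(), *decoder_a.parameters(),
          *decoder_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(faces_a, faces_b):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = F.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
         + F.mse_loss(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def swap(face_a):
    # The swap itself: encode person A's expression, decode it as person B.
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

Because the encoder never learns who it is looking at, only how the face is posed, feeding person A's frames through person B's decoder re-renders A's expressions on B's face.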
There is a growing body of work on detecting fake video and audio. Several kinds of cues can give a deepfake away.
Finding faults
Deepfakes can be detected through flaws that the forgers cannot easily fix.
When a fake-video synthesis algorithm generates new facial expressions, the new images do not always match the exact position of the person's head, the lighting conditions, or the distance to the camera.
To make the fake faces blend in with the environment, they must be geometrically transformed: rotated, resized, or distorted. This process leaves digital artifacts in the resulting image.
You may have noticed particularly severe transformation artifacts yourself, such as blurred edges or artificially smooth skin, which make a photo look altered. More subtle transformations still leave evidence, and an algorithm can be trained to detect them even when people cannot see the difference.
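As a toy illustration of that last idea, here is a minimal sketch, assuming PyTorch, that fabricates the telltale smoothing by down- and up-scaling random "face" crops and trains a tiny classifier to separate warped from untouched images. The model, sizes, and data are all illustrative, not a published detector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny binary classifier: warped (1) vs. untouched (0) image crops.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1),
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def warp(x):
    # Downscale then upscale: leaves blur and resampling artifacts,
    # much like a pasted, resized fake face region.
    small = F.interpolate(x, scale_factor=0.25, mode="bilinear")
    return F.interpolate(small, size=x.shape[-2:], mode="bilinear")

for step in range(500):
    real = torch.rand(16, 1, 64, 64)        # stand-in for real face crops
    fake = warp(torch.rand(16, 1, 64, 64))  # artificially smoothed crops
    x = torch.cat([real, fake])
    y = torch.cat([torch.zeros(16, 1), torch.ones(16, 1)])
    loss = F.binary_cross_entropy_with_logits(classifier(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```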
Blinking
One way to detect a deepfake is to look at how often the person blinks. We all blink roughly once every three to six seconds, and each blink lasts about three-tenths of a second.
In a fake video, the person tends to blink less often than a real person would: the photos used to train the algorithm rarely show the subject with their eyes closed, so it never learns to reproduce natural blinking.
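A minimal sketch of this heuristic, assuming eye landmarks have already been extracted per frame with a facial-landmark detector, is the eye-aspect-ratio approach below. The threshold is illustrative, not a calibrated value.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, as a (6, 2) array."""
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    b = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)           # small value => eye is closed

def blinks_per_minute(ear_per_frame, fps, closed_thresh=0.2):
    """Count blinks as dips of the eye aspect ratio below a threshold."""
    closed = np.asarray(ear_per_frame) < closed_thresh
    # A blink starts wherever the eye goes from open to closed.
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0
```

Blinking once every three to six seconds works out to roughly 10 to 20 blinks per minute; a long clip whose rate falls far below that range is one warning sign.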
Neck and face
Deepfakes are mostly face swaps, since substituting an entire body is much more complicated. It therefore pays to look at the body of the person whose face has been replaced: if its characteristics do not match those of the real person, we are dealing with a forgery.
Short duration
Almost all the deepfakes in circulation last only a few seconds, because producing a convincing fake takes a lot of work. So if we see a very short video with hard-to-believe content, we should suspect a deepfake.
Origin of the recording
When it comes to detecting a deepfake, it also helps to find the first person who shared the video. That lets us verify the context in which it was published and whether the source material contains more detail.
Sound
Often, the algorithm that modifies the video does not adjust the sound. We can tell we are facing a fake when the audio does not match the image, for example when the speech is not correctly synchronized with the movement of the lips.
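One simple way to quantify this, sketched below with NumPy, is to correlate the audio loudness envelope with how far the mouth opens in each frame; both series are assumed to be precomputed per video frame, and the function name is hypothetical.

```python
import numpy as np

def sync_score(mouth_opening, audio_energy):
    """Pearson correlation between mouth opening and audio energy per frame.

    A value near 1.0 means lips and speech move together; a value near
    zero (or negative) suggests the audio does not match the video.
    """
    m = np.asarray(mouth_opening, dtype=float)
    a = np.asarray(audio_energy, dtype=float)
    m = (m - m.mean()) / (m.std() + 1e-8)  # standardize both series
    a = (a - a.mean()) / (a.std() + 1e-8)
    return float(np.mean(m * a))
```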
Details
It is also important to pay attention to the details of the recording. Playing the video at reduced speed can reveal sudden jumps in the image, or in the background around the person, that betray a fake.
Inside the mouth
Machine-learning algorithms are still unable to exactly reproduce the tongue, teeth, or the inside of the mouth during speech. Look at the details: a slight blur inside the mouth can give the recording away as fake.
To stay informed about the technology and artificial intelligence trends every business team needs in order to stay one step ahead of cyber threats, visit aciesdecision.com.