Have you ever thought about what happens when you are fooled by a deepfake? When you suddenly see a video of a well-known person, or someone close to you, and in the clip they say or demand something unimaginable?
Deepfakes are slowly but surely making their way into society, and modern technology now makes it possible for anyone determined enough to create a convincing deepfake. All it takes is the right software, some knowledge, and enough time.
But what exactly are deepfakes? How dangerous are they becoming, and what can you do to protect yourself from deepfakes in the future, or at least recognize them as such as quickly as possible? We explain and offer practical tips.
What are deepfakes?
Before we dive fully into the topic, we would first like to explain what exactly deepfakes are. Deepfakes are usually faces, images, and videos that have been manipulated using artificial intelligence (AI) so that the person shown appears to be someone else.
Many of the deepfakes currently in circulation feature celebrities, placed by IT experts in situations that are out of character for them. One example from the past is a video conference between Vitali Klitschko and Berlin’s Mayor Giffey. What still seems harmless and funny here reveals the real problem with deepfakes: they cannot easily be identified as such.
Especially not if a deepfake is not expected at all and the recipient is not very familiar with the technology. If you don’t pay close attention to what you are seeing, you will think the deepfake is real. And that is dangerous from every perspective.
Pictured is not Tom Cruise but Miles Fisher, a lesser-known actor who has made a name for himself impersonating Tom Cruise with his deepfakes.
Why are deepfakes so dangerous?
The real danger of deepfakes is certainly their potential for deception. That may not play much of a role in the Tom Cruise deepfake mentioned and shown above, but it goes much further. What if politicians speak this way and spread things they never said? Or a secretary talks to the supposed head of the company, who is actually just a deepfake?
Deepfakes could also become a security problem for video identification, for example when opening a bank account or authenticating official documents. Suddenly anyone can imitate anyone else and, as a result, trigger business-damaging decisions or obtain classified information.
What currently still seems a bit far-fetched, given the amount of work involved, will change quickly. Hardware and technology are getting better and better, quantum computers are already waiting in the wings, and the technology behind deepfakes themselves will continue to evolve.
Already, some TikTok and Instagram filters are so good that they can change our faces live. So it won’t be long before deepfakes look far more mature and produce deceptively real results, right on a smartphone. That is creepy and dangerous, because it threatens everyone’s security and privacy. With Fawkes, there is at least an open-source project that aims to prevent unauthorized facial recognition. What only IT experts have mastered so far will presumably be accessible to everyone in the future.
It will be interesting to see whether deepfakes can then be used to bypass biometric systems; Face ID, for example, might be unlocked via deepfake. And slander and disinformation are only the mildest of the attacks deepfakes make possible.
How to detect deepfakes
Detecting deepfakes is not easy. Currently, however, the technology is not yet refined enough to go completely unnoticed, which means there are always telltale signs; you just have to look for them specifically. The dangerous thing about deepfakes is that nobody assumes they are looking at one when they can clearly see a face, so they don’t take a closer look.
If you do look closely, artifacts are often visible, for example during face swapping. Where the deepfake’s face transitions into the actual person, irregularities appear, such as artifacts or color shifts, which should at least strike you as odd. The same applies to the eyes and teeth, which the technology often fails to render sharply, leaving them looking a bit “muddy”.
The biggest clue for detecting deepfakes, however, lies in a person’s facial expressions. Deepfakes require an enormous number of calculations, and the faster a deepfake was produced, the less facial-expression data it is based on; such data is rarely available in full anyway. If you become skeptical during a video conference, you can ask your counterpart to show you their profile. You will quickly discover that the computation behind deepfakes still has its limits and that the side view of the face looks very unnatural.
Faces therefore appear a little more lifeless than in reality, show less emotion, and seem almost “stiff” or even “numb”. If you know the person, you will also notice facial expressions that no longer match the real person. But this is only spotted by someone who suspects a deepfake in the first place and sees the real person often.
Since a deepfake usually includes a matching voice, there is another clue for unmasking them: artificially generated voices tend to be monotonous and often somewhat tinny or distorted. So this is another thing to listen for if you want to spot a fake. Annoyingly, though, distortion, just like artifacts, is more or less a normal part of video telephony.
These security measures exist against AI-generated content
Asking for a side view of a person is particularly effective at the moment. The website metaphysics.ai shows wonderfully how current deepfakes fail when the person turns to the side. A simple test during video calls is therefore to ask the person to turn their head by 90 degrees, preferably in both directions.
Always check the transitions at the neck and hairline: are artifacts or color differences visible there? Then it could be AI-generated content. If the teeth are unnaturally blurred rather than sharply rendered, that is also a sign the video was created artificially, as is noticeably smooth skin without visible contours.
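One of these clues, unnaturally blurred regions such as teeth or skin, can even be scored programmatically. The following is only a minimal sketch in pure Python (real tools would use an image library and detected face regions; the sample patches here are made up for illustration): sharpness is estimated as the variance of a Laplacian filter response, a common focus measure where blurry regions score low.

```python
def laplacian_variance(img):
    """Sharpness score: variance of a 3x3 Laplacian response.
    img is a 2D list of grayscale values (0-255)."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: 4 * centre minus the neighbours
            r = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                 - img[y][x - 1] - img[y][x + 1])
            responses.append(r)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# Synthetic patches: a sharp, high-contrast pattern vs. a flat "muddy" one
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
muddy = [[128 for x in range(8)] for y in range(8)]

print(laplacian_variance(sharp) > laplacian_variance(muddy))  # True
```

A detector would apply such a score to the teeth or skin region of a detected face and flag values far below the rest of the image.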
Real protection is only provided by a cryptographic process in which the source material can be clearly assigned to an identity. A media-forensic review is also conceivable: forensic methods can determine whether and where artifacts appear unnatural and thus expose a fake, i.e. a deepfake.
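The idea of cryptographically binding source material to an identity can be sketched in a few lines. Note that this is a simplified illustration using a shared secret; real provenance systems (such as those following the C2PA standard) use public-key signatures instead, and the key and byte strings below are invented for the example.

```python
import hashlib
import hmac

# Hypothetical key: in reality this would be a private signing key,
# not a shared secret.
CREATOR_KEY = b"secret-key-known-only-to-the-creator"

def sign_media(data: bytes) -> str:
    """Bind the source material to an identity: hash the bytes,
    then authenticate the hash with the creator's key."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(CREATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, signature: str) -> bool:
    """Recompute the signature; any manipulation of the bytes fails."""
    return hmac.compare_digest(sign_media(data), signature)

original = b"raw video bytes of the authentic recording"
tag = sign_media(original)

print(verify_media(original, tag))                 # True: untouched footage
print(verify_media(original + b"deepfake", tag))   # False: manipulated footage
```

Anyone holding the verification key can then tell authentic footage from material that was altered after signing, which is exactly the guarantee a deepfake cannot forge.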
It is also safe to assume that more verification software will appear in the future, or that deepfake analysis will one day become part of all the relevant systems, so that video calls are automatically checked for deepfakes.
AI-generated content – a future outlook
This brings us to the last section of our article, in which we would like to take a brief look at the future. We may not be psychics, but technology is known to make great leaps, and these leaps will make deepfakes better and easier to produce. That, in turn, creates a realistic risk that we will all have to deal with deepfakes, because they may become part of our everyday lives.
Not questioning a video call from the boss could then become a real security risk; CEO fraud has already been a successful scam in the past. Failing to check whether people really are who they appear to be could also be crucial in the media, especially in political news. Just because something looks real doesn’t mean it is. We have known this for a long time, yet we still pay far too little attention to whether images or videos have been manipulated.
The trend will continue, so it is important to take the issue of deepfakes seriously, especially in companies. In the future, pay more attention to whether the person appearing in your digital environment is really who they claim to be.