From changing one's skin tone to adding animal ears, face-editing and manipulation technologies are getting more and more sophisticated; today some can even create lifelike fake videos. Now available to the masses, such digital content creation tools have long been part of the entertainment industry. But an unprecedented danger simmers beneath what was created to be nothing more than fun: following the trend, deepfakes have become a serious threat today.
Experts fear that, as AI continues to advance, separating real videos from fake ones could become next to impossible. But what is a deepfake? And why is something as fun as a face swap believed to be among the most dangerous crimes of the coming future?
What is Deepfake?
Deepfake, a word combining "deep learning" and "fake", is an artificial intelligence technology. In layman's terms, a deepfake is a falsified video created with the help of deep learning, a subset of AI referring to a family of algorithms capable of learning and making decisions on their own.
The technology produces a persuasive counterfeit by closely studying pictures or video of a target person (broken into thousands of frames) from every angle, mimicking the person's speech patterns and behavior. Once this primary fake is created, deep learning makes the video more lifelike through generative adversarial networks (GANs), in which a generator network produces fakes while a discriminator network tries to spot them, each improving the other.
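The adversarial back-and-forth between generator and discriminator can be sketched in a few lines. This is a deliberately toy illustration, nothing like a real deepfake pipeline: the 1-D "data", the linear models, the learning rate, and the step count are all illustrative assumptions.

```python
import numpy as np

# Toy GAN: a generator learns to mimic "real" data while a discriminator
# learns to tell real samples from generated ones.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # "Real" data: samples from N(4, 1), standing in for genuine frames.
    return rng.normal(4.0, 1.0, size=n)

# Generator: an affine map from noise to a sample, g(z) = w*z + b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression on a scalar, d(x) = sigmoid(a*x + c).
d_a, d_c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=batch)
    real = real_samples(batch)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    for x, label in ((real, 1.0), (g_w * z + g_b, 0.0)):
        p = sigmoid(d_a * x + d_c)
        grad = p - label                    # gradient of BCE w.r.t. the logit
        d_a -= lr * np.mean(grad * x)
        d_c -= lr * np.mean(grad)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    fake = g_w * z + g_b
    grad = (sigmoid(d_a * fake + d_c) - 1.0) * d_a   # chain rule through d
    g_w -= lr * np.mean(grad * z)
    g_b -= lr * np.mean(grad)

# The generator's samples should now cluster near the real mean (about 4).
gen_mean = float(np.mean(g_w * rng.normal(size=1000) + g_b))
print(f"generated sample mean: {gen_mean:.2f}")
```

In a real deepfake system the generator and discriminator are deep convolutional networks operating on video frames, but the training dynamic is the same: the generator only stops improving when the discriminator can no longer tell its output from the real thing.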
Benjamin Guedj, a machine learning researcher, told Al Jazeera: "(Deepfake) is used to impersonate people into saying things they never originally said, or acting in ways they never acted before."
With dozens of free face-swapping platforms available today, deepfake technology is now accessible to the masses. Though most of these platforms fail to make videos look truly real, tech professionals can make them lifelike in a day or two, and the process is getting easier as the technology improves.
What Makes False Videos So Dangerous?
There are numerous useful, creative, and completely harmless applications of this technology, along with slightly stranger ones, such as creating videos of the dearly departed for comfort. In such cases, a deepfake is a perfectly normal piece of technology; the problem begins when one is made without the targeted person's consent, and in almost 90% of cases women are the target.
The use of deepfakes in pornography, attaching the faces of celebrities or ordinary women to the bodies of porn actors to defame them, has become a commonplace occurrence. According to a study conducted by Deeptrace, a Netherlands-based company, an astonishing 96% of all deepfakes online were non-consensual pornography, creating a huge mental and social impact on the victims.
The use of deepfakes goes deeper than pornography: financial scammers are using this newest AI technology to conduct fraudulent activities, for example by impersonating a user's voice. Villasenor, a professor of electrical engineering at the University of California, Los Angeles, told CNBC: "(Deepfake) can be used to undermine the reputation of a political candidate by making the candidate appear to say or do things that never actually occurred."
Deepfakes Fuelling Misinformation
The dangers associated with deepfakes are real, adding to the misinformation in a world already struggling to tell real information from fake. Actual facts are dismissed as false, conspiracy theories thrive, and some of the most powerful figures run disinformation campaigns. But deepfake technology takes this flood of misinformation to another level. A deepfake creates what most other misinformation does: a foggy cloud of doubt, to the point where actual reality can be misunderstood as fake.
From scarily convincing falsified videos that have gone viral to serious fake political videos, the danger is simmering. The technology behind deepfakes is getting better, while new deepfake applications on the market are making it ever more accessible to the masses.
How To Spot Deepfakes?
Some of the small details that can help us spot a deepfake are:
Blurring around the hairline and ears
Differences in resolution between the face and the rest of the frame
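The "difference in resolution" cue can even be checked programmatically in a crude way: a swapped-in face is often re-rendered and blended, leaving it smoother (less high-frequency detail) than the surrounding frame. The sketch below uses variance of the image Laplacian as a sharpness proxy; the box coordinates and the 0.5 ratio threshold are illustrative assumptions, not a production detector.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian; higher means a sharper region."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def face_blur_suspicious(frame, face_box, bg_box, ratio=0.5):
    """Flag a frame whose face region is much blurrier than the background.

    Boxes are (row0, row1, col0, col1); `ratio` is an assumed threshold.
    """
    y0, y1, x0, x1 = face_box
    b0, b1, b2, b3 = bg_box
    face_sharpness = laplacian_variance(frame[y0:y1, x0:x1])
    bg_sharpness = laplacian_variance(frame[b0:b1, b2:b3])
    return face_sharpness < ratio * bg_sharpness

# Demo on a synthetic frame: a noisy ("sharp") background with a box-blurred
# square standing in for a pasted, re-rendered face.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
face = frame[16:48, 16:48]
frame[16:48, 16:48] = (face + np.roll(face, 1, 0) + np.roll(face, -1, 0)
                       + np.roll(face, 1, 1) + np.roll(face, -1, 1)) / 5
print(face_blur_suspicious(frame, (16, 48, 16, 48), (0, 16, 0, 64)))
```

Real detectors are far more sophisticated, combining many such per-frame cues with temporal ones (blinking, lip-sync) learned by neural networks, but this shows the kind of low-level artifact they look for.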
As said before, this technology gets more lifelike with every upgrade, and differentiating real video from fake could become nearly impossible. One implication is that video and audio evidence could be taken off the table in the courts entirely. Top tech companies like Microsoft and Facebook are working on automated software to flag deepfakes, while leading organizations are looking into methods to tackle deepfakes as a threat.
Good technology falling into the wrong hands can cause severe damage, so we as users need to be more aware of the threat lurking out there, and should scrutinize every piece of suspicious information before believing it.