AI FOR GOOD AND BAD

DeepFakes — a Security Invasion or a Revolution (1/2)!

The negative side of the AI revolution

DeepFakes are highly realistic, high-quality images and videos generated entirely by deep learning (a branch of AI) algorithms. The technology is controversial because there have already been a few cases where these artificially created videos, or artificially cloned voices, have been used for unlawful purposes. Just type ‘deepfake’ into Google Images and you will see how prevalent it is. Or let me first show you a sample of this technology, after which you will know what I am talking about (let the YouTube ad play if you are watching it on LinkedIn mobile :P).

TikTok’s parent company ByteDance has built an unreleased feature, which it calls Face Swap, using deepfake technology. A user would be able to put her own face into an existing celebrity video, giving the impression that she is actually acting or singing in that video. Snapchat has also recently bought a Ukrainian company, AI Factory, which specializes in such computer vision products.

I know, you will say that we saw features that replace the face in a video with some other face a long time ago. But I am talking about making that newly replaced face produce expressions and voice modulations exactly as you want them, live on a digital platform. That is what a deepfake achieves. For example, to create Obama’s deepfake as in the example above, a similarly built person is recorded talking in front of a camera. His facial movements are picked up and mapped onto Obama’s pictures/videos to create the impression that Obama is talking and making those facial expressions. Even Obama’s voice is generated by training a ‘voice spoofing’ neural network on hours of Obama’s speeches.
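For contrast, here is roughly what that older, non-learned kind of face replacement looks like: detect a face in one frame, paste it over the face in another, and blend the seams. This is only a minimal sketch, assuming OpenCV is installed, and the two image file names are hypothetical placeholders; a real deepfake replaces the paste step with a neural network trained to re-render the target’s face with the source’s expressions.

```python
# Naive face replacement (not a deepfake): crop, resize, and blend one face
# onto another. File names below are hypothetical placeholders.
import cv2
import numpy as np

src = cv2.imread("actor_frame.jpg")    # person supplying the face
dst = cv2.imread("target_frame.jpg")   # frame whose face gets replaced

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    """Return the bounding box (x, y, w, h) of the first detected face."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return faces[0]  # assumes at least one face was found

sx, sy, sw, sh = first_face(src)
dx, dy, dw, dh = first_face(dst)

# Resize the source face to the target face box and blend it in place.
face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))
mask = 255 * np.ones(face.shape[:2], dtype=np.uint8)
center = (dx + dw // 2, dy + dh // 2)
swapped = cv2.seamlessClone(face, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("naive_swap.jpg", swapped)
```

The pasted face keeps its original expression. What makes a deepfake different is that a trained network re-renders the target face frame by frame with the source’s expressions.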

This can have very serious consequences when used negatively in pornography and politics. In pornography, all you need to do is find a porn actor or actress who is built similarly to the target celebrity, and the rest of the work is done by the algorithm. Similarly, politicians are an easy target for this technology, since most of a politician’s videos show them either standing in a stable position or sitting for an interview, which provides clean training data. All you need to do is get a person to speak the content you want the politician to appear to say, and the rest is taken care of by the trained algorithm. The interesting part is that these algorithms are freely available on the internet. Below is a very high-level representation of how such an algorithm works.

A very high-level illustration of how the deepfake algorithm works for Obama’s deepfake
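In code terms, the recipe behind most freely available deepfake tools is a pair of autoencoders that share one encoder: the shared encoder learns expression and pose features from faces of both people, while each person gets their own decoder that learns to re-render that person’s face. The sketch below is a minimal illustration assuming PyTorch; the layer sizes and the 64x64 input are made-up choices, not any particular tool’s architecture.

```python
# Minimal sketch of the shared-encoder / two-decoder deepfake setup.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),        # shared "expression/pose" code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, code):
        x = self.fc(code).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()    # one decoder per identity

# Training reconstructs A's faces with decoder_a and B's with decoder_b.
# At "swap" time, a frame of the stand-in (A) is rendered as the target (B):
frame_of_a = torch.rand(1, 3, 64, 64)          # placeholder for a real face crop
fake_b = decoder_b(encoder(frame_of_a))
print(fake_b.shape)                            # torch.Size([1, 3, 64, 64])
```

The shared encoder is what carries the expressions across identities: decode the same code with the other person’s decoder and the target appears to make the stand-in’s expressions.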

Another great example, just to give you a sense of what can be achieved, can be seen in the GIF below, which I found in a Medium article.

A deepfake GIF in which President Trump’s facial expressions are copied onto multiple Game of Thrones characters

Deepfakes are just one of the malignant faces of AI algorithms. AI systems can be fooled or they can be used to create sophisticated attacks as well.

Fooling AI systems

If a hacker can find the source of the data used to train an AI system, they can inject their own samples to bias the system in whatever way they want. This is particularly dangerous because it can go completely undetected; the technique is called data poisoning. Corrupting the training data directly changes how the algorithm classifies objects: small disturbances can be added to an image that are significant to the algorithm yet invisible to the human eye, completely defeating the purpose of training the algorithm.
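To make the poisoning idea concrete, the sketch below flips the labels of a fraction of the training data (a simple label-flipping attack) and shows how test accuracy degrades. It is a minimal sketch on a synthetic dataset, assuming scikit-learn is available; the model, dataset, and poisoning rates are illustrative and not drawn from any real incident.

```python
# Label-flipping poisoning on a toy dataset: corrupt some training labels
# and watch the trained model's test accuracy fall.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for "training data".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip the labels of a fraction of training samples and report the
    test accuracy of a model trained on that poisoned data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # corrupt the labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{int(frac * 100)}% poisoned -> test accuracy "
          f"{accuracy_after_poisoning(frac):.2f}")
```

In a real poisoning attack the corrupted samples are chosen far more carefully, precisely so that the drop in accuracy, or the bias that gets introduced, stays below anyone’s notice.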

AI used for sophisticated attacks

According to a Wall Street Journal report, the CEO of a UK-based energy firm was scammed into transferring approximately $243,000. This was done using ‘voice spoofing’, a sister technology of deepfakes. The scammer used a system trained on the voice of the CEO’s boss at the parent company and asked the CEO to transfer the amount to a vendor’s bank account. It was only when the scammer requested money a second time that the CEO grew suspicious. By then, the first amount had already been sent to a Mexican bank account and distributed onward to multiple other accounts at different locations.

Moreover, a new concept of AI Trojans has recently popped up. IBM has developed a proof of concept called DeepLocker: malware that hides inside an ordinary application built for corporate or consumer use and carries an AI model of its own. The malicious payload only activates when that model finds a specific trigger, such as a particular face or voice. Though we have not seen such an attack in the real world yet, it will be really difficult to detect if one comes up.
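The trick IBM described is that the payload is encrypted and the decryption key is derived from what the embedded AI model recognizes, so the payload stays unreadable to anyone inspecting the software unless the intended trigger is actually observed. The toy sketch below illustrates only that conditional key-derivation idea with a harmless message; the labels, the hash-based key, and the XOR cipher are illustrative assumptions, not IBM’s implementation.

```python
# Conceptual illustration only: decryption keyed on a model's output, the
# idea behind an AI-keyed trigger. The "payload" is a harmless string and
# all names are hypothetical.
import hashlib

def derive_key(recognized_label: str) -> bytes:
    # The key never exists on its own; it is recomputed from whatever the
    # recognition model outputs at run time.
    return hashlib.sha256(recognized_label.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Packaging" step: encrypt a message under the key derived from the
# intended trigger label (what a face or voice model would emit for the target).
TRIGGER_LABEL = "target_person_42"            # hypothetical classifier output
MESSAGE = b"harmless demo message"
MESSAGE_HASH = hashlib.sha256(MESSAGE).digest()
locked = xor_cipher(MESSAGE, derive_key(TRIGGER_LABEL))

# Run time: for each label the model emits, try to rebuild the key.
for observed in ("person_07", "person_13", "target_person_42"):
    attempt = xor_cipher(locked, derive_key(observed))
    if hashlib.sha256(attempt).digest() == MESSAGE_HASH:
        print(f"{observed}: trigger matched, message unlocked ->", attempt.decode())
    else:
        print(f"{observed}: message stays opaque")
```

Because the key only ever exists as a function of the trigger, static inspection of the software reveals neither the payload nor who or what it is waiting for.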

How DeepLocker is built

Combating DeepFakes

Any big leap in the development of humanity comes with its own consequences. However, we can still fight those consequences and maintain harmony and balance. Organizations like Facebook, Microsoft, and other big tech companies have released large datasets of such fake content so that algorithms can be trained to detect and fight deepfakes. Multiple challenges have also been launched to create AI algorithms that detect deepfakes created by AI, i.e. ‘an AI to fight AI’. Researchers have already started coming up with ideas and papers to combat this negative side of the technology.

Along with its malignant side, this technology has a benign side as well. It can completely revolutionize personalization, marketing, corporate training, and the entertainment industry. We will see how in my next writing!

Product Enthusiast — Utilizing the power of AI and Design to rethink possibilities and reframe the problem statement! Website: www.hellodeepaksingh.com
