AI FOR GOOD AND BAD
DeepFakes: a Security Invasion or a Revolution? (1/2)
DeepFakes are hyper-realistic, high-quality images and videos generated entirely by deep learning (a branch of AI) algorithms. They are controversial because there have already been cases where such artificially created videos or cloned voices were used for unlawful purposes. Just type ‘deepfake’ into Google Images and you will see how prevalent it is. Or let me first show you a sample of this technology, after which you will know what I am talking about (let the YouTube ad play if you are watching it on LinkedIn Mobile :P).
TikTok’s parent company ByteDance has built an unreleased feature it calls Face Swap, using deepfake technology. The user will be able to put her own face into an existing celebrity video, giving the impression that she is actually acting or singing in it. Snapchat has recently acquired a Ukrainian company, AI Factory, which specializes in such computer vision products.
I know, you will say that we have seen face-replacement features in videos for a long time. But I am talking about making the newly replaced face produce expressions and voice modulations exactly as you want them, live on a digital platform. That is what deepfakes achieve. For example, to create Obama’s deepfake as in the example above, a person of similar build is brought to talk in front of the camera. His facial movements are captured and mapped onto Obama’s picture/video to create the impression that Obama is talking and making those facial expressions. Even Obama’s voice is generated by training a ‘voice spoofing’ neural network on hours of Obama’s speeches.
This can have very serious consequences when used negatively in pornography and politics. In pornography, all you need to do is find an actor or actress with a build similar to the target celebrity’s, and the rest of the work is done by the algorithm. Similarly, politicians are an easy target of this tech, since in most of their videos they are either standing in a stable position or sitting for an interview, which yields clean training data. All you need to do is get a person to speak the content you want the politician to say, and the trained algorithm takes care of the rest. The interesting part is that these algorithms are freely available on the internet. Below is a very high-level representation of how this algorithm functions.
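For readers who want a feel for the moving parts, here is a minimal toy sketch of the classic face-swap architecture: one shared encoder that learns a person-independent representation of pose and expression, plus one decoder per identity. At swap time, a frame of the source actor is encoded and then decoded with the target person's decoder. All names, sizes, and weights below are made up for illustration; a real system trains deep networks on thousands of frames.

```python
import numpy as np

# Toy sketch of the classic deepfake face-swap architecture:
# one SHARED encoder, one decoder PER identity.
# (Weights here are random stand-ins; a real system trains
# deep convolutional networks, not single linear maps.)

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # flattened toy "face" image
LATENT_DIM = 128        # shared latent representation

W_encoder = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.01
W_decoder_A = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.01
W_decoder_B = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    # expressions/pose -> person-independent latent code
    return W_encoder @ face

def decode(latent, W_decoder):
    # latent code -> identity-specific face
    return W_decoder @ latent

# A frame of the source actor (person A) ...
frame_A = rng.normal(size=FACE_DIM)

# ... is encoded once, then decoded with PERSON B's decoder,
# producing B's face wearing A's expression.
swapped = decode(encode(frame_A), W_decoder_B)
print(swapped.shape)
```

The key design point is the *shared* encoder: because both identities pass through the same latent space during training, a code extracted from person A can be rendered by person B's decoder.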
Another great example, just to give you a sense of what can be achieved, can be seen in the gif below, which I found in a Medium article.
Deepfakes are just one of the malignant faces of AI algorithms. AI systems can also be fooled, or used to mount sophisticated attacks.
Fooling AI systems
If a hacker finds the source of the data used to train an AI system, they can inject their own data to bias the system in whatever way they require. This can be very dangerous, as it can go completely undetected. The phenomenon is called data poisoning: corrupting the training data directly impacts how the algorithm classifies objects. A closely related trick, the adversarial example, instead perturbs the input at prediction time: small disturbances are added to an image that are significant to the algorithm but invisible to the human eye. Either attack can completely defeat the purpose of training the algorithm.
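A tiny, self-contained illustration of the adversarial-example idea, assuming nothing beyond a made-up linear classifier: a perturbation far smaller than the pixel range flips the model's decision. The weights and "image" are fabricated for demonstration; real attacks (e.g. FGSM-style methods) do the same thing against deep networks.

```python
import numpy as np

# Toy adversarial perturbation against a linear classifier.
# All numbers are invented; the point is that a per-pixel change
# of 0.06 (tiny on a 0..1 pixel scale) flips the prediction.

rng = np.random.default_rng(42)

w = rng.normal(size=100)       # hypothetical trained classifier weights
x = 0.05 * np.sign(w)          # an input the model scores as positive

def predict(img):
    return 1 if w @ img > 0 else 0

# FGSM-style step AGAINST the score gradient: the gradient of
# (w @ x) with respect to x is w, so we subtract eps * sign(w).
eps = 0.06
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # decision flips from 1 to 0
```

The perturbation is bounded per pixel by `eps`, which is why such attacks can go unnoticed by a human while still steering the model.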
AI used for sophisticated attacks
According to a Wall Street Journal report, the CEO of a UK-based energy firm was scammed into transferring approximately $243,000. This was done using ‘voice spoofing’ (a sister technology of deepfakes). The scammer used a system trained on the voice of the CEO’s boss from the parent company and asked him to transfer the amount to a vendor’s bank account. It was only when the scammer requested money a second time that the CEO’s suspicions were raised. But the first amount had already been sent to a Hungarian bank account and then distributed to multiple other accounts at different locations.
Moreover, a new concept, the AI Trojan, has recently popped up. IBM has developed a proof of concept called DeepLocker, which hides inside an AI system built for corporate or consumer usage. The malware inside the system activates only when it finds a specific trigger, such as a specific face or voice. Though we have not yet seen such an attack in the real world, it would be really difficult to detect if one came up.
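The trick that makes this class of Trojan so hard to analyze can be sketched in a few lines: the payload is encrypted, and the decryption key is derived from the trigger itself, so without the exact trigger an analyst sees only ciphertext. Everything below (the "face embedding" strings, the XOR cipher, the harmless stand-in payload) is invented for illustration; it is the concealment concept, not IBM's actual implementation.

```python
import hashlib

# Conceptual sketch of trigger-keyed malware concealment:
# the key to unlock the payload is DERIVED from the trigger
# (e.g., a face-recognition result), so it never appears in the code.

def key_from_trigger(trigger: bytes) -> bytes:
    return hashlib.sha256(trigger).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # Toy repeating-key XOR cipher, purely for demonstration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Attacker" side: encrypt a harmless stand-in payload under the
# hypothetical embedding of the target's face.
target_face_id = b"embedding-of-target-face"
ciphertext = xor(b"payload", key_from_trigger(target_face_id))

# "Victim" side: every observed face is tried as a key;
# only the exact target trigger reproduces the payload.
for seen_face in (b"embedding-of-random-face", b"embedding-of-target-face"):
    attempt = xor(ciphertext, key_from_trigger(seen_face))
    print(seen_face, attempt == b"payload")
```

Because the hash of a wrong trigger yields a completely different key, brute-forcing or statically inspecting the binary reveals nothing about what the payload does or whom it targets.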
Any big leap in the development of humanity comes with its own consequences. However, we can still fight those consequences and maintain harmony and balance. Organizations like Facebook, Microsoft, and other big techs in the industry have open-sourced large amounts of such fake data for training algorithms that detect and fight deepfakes. Multiple challenges have also been launched to create AI algorithms that detect deepfakes created by AI, i.e. ‘an AI to fight AI’. Researchers have already started coming up with ideas and papers to combat this negative side of the technology.
Along with the malignant side of this technology, there is a benign side as well. It could completely revolutionize personalization, marketing, corporate training, and the entertainment industry. We will see how in my next post!