AI FOR GOOD AND BAD
DeepFakes — a Security Invasion or a Revolution (2/2)!
DeepFakes are super-realistic, high-quality images and videos generated entirely by Deep Learning (a stream of AI) algorithms. This is the second part of a two-part series on DeepFake technology. In the first part, we introduced the technology and watched a sample deepfake of Obama to see what can be achieved with it. We also found out how this technology can be misused to fool AI systems, launch sophisticated attacks, and cause social disturbance.
In this part, we will explore how this technology promises genuinely useful applications that can benefit multiple businesses and consumers at the same time.
The positive side of the DeepFake
App-based companies like TikTok and Snapchat already use this tech to attract users and keep them hooked on the face-swap features made possible by DeepFake technology. But it does not stop there. Let’s find out more.
Media companies will likely be the first to monetize the benign (positive) side of this tech on a large scale. The potential first came into the picture when Samsung released a paper in which the Mona Lisa painting, a famous actress of the past, and other public figures were each shown talking, with their facial movements controlled as desired. The video below will help you see this better.
The Hollywood industry might be able to bring back the famous actors and actresses of the 50s and 60s and create new movies with them if this technology is refined further. We already have plenty of photos and videos of these figures for training the algorithms. This would not only benefit the film industry but also bring moments of joy to aging baby boomers who could see their childhood heroes alive again. We might even see future Fast and Furious sequels with Paul Walker in them (I hope . . .)!
Moreover, it could also revolutionize and personalize future advertisements. Multiple versions of an advertisement can be made with just the faces of famous figures (with their permission, of course) in very little time and at a fraction of the usual production cost. A brand need only shoot one ad and then create localized versions featuring figures famous in different regions. This makes ads more targeted and personalized, using the local language, dialect, and accent, with lip movements synced to match. In fact, an Indian politician from Bihar, Manoj Tiwari, used DeepFake technology to reach a wider audience by translating his message into multiple languages. Though this could have been misused as well, no major repercussions were observed. (Source: MIT Technology Review, The Verge)
In the retail industry, it can help create hyper-targeted ad banners for people passing nearby: capture a few seconds of video and show you an ad featuring yourself at the mall. Virtual try-ons are already a thing in countries like Korea; this tech will make them more prevalent and affordable. Companies like ZeeKit, SenseMi, and Amazon have already started using it to some extent for online retail. A screen recording from ZeeKit is shown below.
Moving further, this could also deeply impact the influencer market. An organization could create an AI-generated face, gradually make it famous through online activity, and then use that same face as an influencer. This is so cool: no need to pay a big fee to a human influencer.
DeepFake can also be used in museums to resurrect historical figures, and in video games to personalize the experience when players interact with characters.
In conclusion, I can say that with every great advancement comes an attached negative side. It is our responsibility to develop the technology so that a defense mechanism is built into the underlying technology itself, leaving no room for misuse. For example, if we could embed a feature in the algorithm that enables the detection of a DeepFake video, the use of freely available algorithms for negative purposes could be controlled. This would limit the technology to benign uses like media and entertainment, as discussed above. As mentioned in my last article, big tech companies are already working on such methods, so we can hope that fake detectors will soon be available for everyone’s use.
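To make the idea of a built-in defense more concrete, here is a minimal sketch of one possible approach: hiding an invisible signature in the least significant bits of a generated frame’s pixels, which a detector can later read back. This is a simplified illustration, not any production method; the function names and the NumPy frame representation are my own assumptions, and real forensic watermarks are far more robust than plain LSB embedding.

```python
import numpy as np

def embed_watermark(frame: np.ndarray, signature: str) -> np.ndarray:
    """Hide a signature string in the least significant bits of pixel values."""
    # Turn each character into 8 bits.
    bits = [int(b) for ch in signature.encode() for b in f"{ch:08b}"]
    flat = frame.flatten()
    if len(bits) > flat.size:
        raise ValueError("frame too small for signature")
    # Clear each target pixel's lowest bit, then set it to the signature bit.
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit
    return flat.reshape(frame.shape)

def detect_watermark(frame: np.ndarray, signature: str) -> bool:
    """Read back the least significant bits and compare them to the signature."""
    bits = [int(b) for ch in signature.encode() for b in f"{ch:08b}"]
    flat = frame.flatten()
    return all((flat[i] & 1) == bit for i, bit in enumerate(bits))

# Example: a hypothetical generator would call embed_watermark on every
# output frame; a public detector would call detect_watermark.
frame = np.zeros((32, 32, 3), dtype=np.uint8)
marked = embed_watermark(frame, "DF")
print(detect_watermark(marked, "DF"))   # the signature is recoverable
print(detect_watermark(frame, "DF"))    # an unmarked frame fails the check
```

The pixel changes are at most one intensity level, so the mark is invisible to viewers, yet any tool that knows the scheme can flag the video as generated. A real deployment would need a mark that survives compression and cropping, which LSB embedding does not.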