‘Photo and Video Manipulation Artist’ – this is how the profile ‘Ash Fan’ (@crazyashfan) on the X platform described itself. But what that ‘artwork’ turns out to be is shocking. This ‘artist’ takes pornographic images from various porn websites and uses artificial intelligence (AI) to edit the faces of leading actresses and celebrities onto them.
This account posted fake pornographic videos of Bollywood actresses like Aishwarya Rai, Deepika Padukone, Alia Bhatt, Kajol, Nayanthara, Kiara Advani and others – all deepfake videos made with AI. It began publishing deepfake videos of actresses in September and had posted 39 of them. The account was deleted after the incident became a national issue.
∙ You fake!
After the fake video of actress Rashmika Mandanna went viral, ‘deepfake’ videos are being discussed again. The video shows a woman dressed in black getting into an elevator. But the woman’s face has been morphed and edited to resemble Rashmika. At first glance the editing is difficult to detect, but if you look closely, you can see the woman’s face change into Rashmika’s just as she enters the lift.
The real person in the video is social media star Zara Patel. Using AI, Rashmika’s face was placed over Zara’s. After this became widely known, many celebrities spoke out in support of the actress. Rashmika said the experience was extremely scary. Zara Patel said she had no part in it and was worried about the future of girls who share pictures and videos on social media.
As the incident became controversial, the central government directed social media platforms to take strict action. Under the IT Rules, 2021, platforms are required to remove deepfake videos within 36 hours of being notified about them. Union Minister of State for IT Rajeev Chandrasekhar said that if this is not followed, the ‘safe harbour’ protection enjoyed by social media platforms will be completely lost. The safe harbour provision of the IT Act, 2000 shields platform companies from liability for user content; if it is lost, company representatives will face action over that content. This is the second time in six months that the Centre has issued guidance on the matter.
A deepfake picture of actress Katrina Kaif was also circulated on social media yesterday. A scene from the shooting of the movie Tiger 3 was used for it.
∙ Ask and you shall receive
An investigation into Rashmika Mandanna’s fake video led to the ‘deepfake’ network on X and Telegram. All four accounts that followed the @crazyashfan account were sharing similar deepfake videos, made by editing the faces of Indian actresses into porn videos.
Along with the posts on the X platform, they also share a link to a Telegram channel. Once inside the Telegram channel, there is even a ‘bot’ system that produces deepfake videos on demand. Many websites work the same way, and they receive many requests for ‘nude pictures’ of actresses. Some websites even let users upload images themselves to generate AI images from them.
A few days ago, girls at a high school in the US complained that their male classmates had created pornography of them using AI, and that these pictures were shared in social media groups. Although the incident is still under investigation, it shows that all it takes is a phone and an AI tool to create deepfake images and videos. Hany Farid, a professor at the University of California, Berkeley, who has done research in digital forensics and image analysis, notes that in the past, making a deepfake image required hundreds of source images.
“With the boom in ChatGPT and other AI software, human involvement is minimal. Everything is done by AI,” says Malavika Rajkumar, a lawyer at IT for Change, a Bengaluru-based NGO. “Deepfakes are an invasion of privacy. But often even the victim does not know that their rights are being violated. 96% of deepfake videos on the internet are porn videos. The police have the technology to track the accounts that post these. But how can we control the AI tools that create them?” asks Malavika.
∙ Not morphing
Deepfaking isn’t the old formula of taking someone’s video and swapping its voice, or crudely morphing it by pasting on a different head. A deepfake is made by examining all the available videos and footage of a person in detail – even studying the muscle movements of their face – and then generating a video that reproduces the way they speak, their voice and their body language. Artificial intelligence is behind this. It can produce videos in which we appear to say things we never said, or sing songs we never sang.
In June 2019, a video of Mark Zuckerberg, the owner of Instagram (and Facebook), appeared on Instagram. The allegation that Facebook had sold the personal information of its users was a big controversy at the time. In the Instagram video, Zuckerberg says, among other things, “I control the future, with billions of people’s personal information in my hands.” In the video, Zuckerberg can be seen and heard clearly. The fact, however, is that Zuckerberg never gave any such video message.
Zuckerberg’s ‘deepfake’ was created by the artists Bill Posters and Daniel Howe as an art project. Earlier, some deepfake videos of US House of Representatives Speaker Nancy Pelosi had surfaced on Facebook. Even when this was pointed out, Facebook refused to remove them. Now the owner of Facebook got a taste of it himself!
Deepfaking is not a new technique. Academic research on it has been going on for a long time, alongside efforts by individuals on the internet, and the technique is genuinely useful in fields like cinema. A deepfake video of former US President Barack Obama caused great controversy in 2017. In politics, US President Donald Trump, German Chancellor Angela Merkel and Argentina’s Mauricio Macri have all been victims. One can only imagine the dangers such forgeries pose when they can make things never said seem believably said. Added to this is the tragedy of the technology being used to create porn videos and the like. Deepfakes first became widely discussed in 2017, when fake footage of some famous Hollywood stars came out.
∙ Be careful
Deepfakes may become a major threat in the coming days. Technologies to identify such fake videos are being researched in many parts of the world. There are some simple techniques, like looking carefully at the eyes of the person in the video: researchers have found that people in deepfake videos don’t blink the way normal people do, and their eyes stay open longer. The challenge is that the technology keeps improving to overcome such tells. The only hope is that counterfeit-detection technology will grow along with it.
∙ How to recognize?
Many tools are now available to detect whether a photograph is fake or AI-assisted. For example, if you upload a picture to Optic’s ‘AI or Not’ website, it will assess whether the picture was created by AI. Similarly, sites like illuminarty.ai and fotoforensics.com will also evaluate images. These are tools anyone can easily use.
In the case of videos, easy-to-use checking systems are only just emerging. Microsoft’s Video Authenticator is one such tool, though it requires some technical skill to use. AI technology is growing all the time; the relief is that mechanisms to deal with the resulting dangers are growing too.
Common people like us can try some simple techniques:
∙ If you see a suspicious video, search the internet for any information about it.
∙ Check whether the news or claims mentioned in it have appeared in reliable media.
∙ Watch for anything unusual in the video.
∙ Look carefully at the eyes of the people in it. Researchers have found that people in deepfake videos don’t blink like normal people do, and keep their eyes open longer.
∙ Look for abnormalities in facial muscles and body movements.
∙ Note any abnormality in colour and lighting.
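The blink cue above can even be quantified. Researchers commonly use the “eye aspect ratio” (EAR): the ratio of an eye’s vertical opening to its width, which drops sharply during a blink. Below is a minimal sketch in Python, assuming the six eye-landmark coordinates are already available (real systems would obtain them per video frame with a facial-landmark detector; the coordinates here are purely illustrative, not from any actual video):

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered
    outer corner, top-left, top-right, inner corner,
    bottom-right, bottom-left."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical distances (eyelid opening) over one horizontal (eye width).
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmarks: an open eye vs. a nearly closed one.
open_eye   = [(0, 0), (10, -6), (20, -6), (30, 0), (20, 6), (10, 6)]
closed_eye = [(0, 0), (10, -1), (20, -1), (30, 0), (20, 1), (10, 1)]

print(eye_aspect_ratio(open_eye))    # 0.4  -> eye open
print(eye_aspect_ratio(closed_eye))  # ~0.067 -> eye closed (a blink)
```

Over a whole video, one would count the frames where the EAR dips below a threshold (around 0.2 is a common choice) and compare the resulting blink rate with a normal human rate; an unnaturally low rate is one red flag for a deepfake.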
But here is the thing: AI techniques are being refined so fast that they may soon defeat even these observations of ours!