Centre issues advisory to social media platforms over deepfakes

In its advisory sent on Tuesday, the Ministry of Electronics and Information Technology cited Section 66D of the IT Act as well as Rules 3(1)(b)(vii) and 3(2)(b) of the IT Rules, and asked intermediaries to take immediate action against deepfake photos, videos or any other such content.

The advisory also stressed the penalty: imprisonment of up to three years for perpetrators under Section 66D.

Legal provisions cited by the ministry:

1. Section 66D of the Information Technology Act - Punishment for cheating by personation by using computer resource: imprisonment of up to 3 years and a fine of up to Rs 1 lakh.

2. IT Intermediary Rule 3(1)(b)(vii) - A social media intermediary must observe due diligence, including ensuring that its rules and regulations, privacy policy or user agreement inform users not to host any content that impersonates another person.

3. IT Intermediary Rule 3(2)(b) - An intermediary shall, within 24 hours of receiving a complaint about content in the nature of impersonation in electronic form, including artificially morphed images of an individual, take all measures to remove or disable access to such content.

Deepfake controversy in India

This advisory comes a day after a deepfake video showing actor Rashmika Mandanna entering an elevator was debunked. The original video was uploaded on Instagram by Zara Patel, a British-Indian woman, as reported by fact-checking journalist Abhishek Kumar.

The deepfake video drew sharp responses from several Bollywood actors, who said such creations were a matter of grave concern. The Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, also warned that social media intermediaries must take down such content promptly or face strict action.

This is not an isolated incident. Deepfake technology has garnered attention in recent years due to its potential to manipulate digital content, blurring the boundaries of reality and posing serious implications. By employing artificial intelligence and machine learning techniques, malicious actors can create fabricated audio, video or images, making it increasingly challenging to distinguish authentic content from manipulated content.

Given the rising concerns surrounding the potential of deepfakes to deceive and mislead, it is commendable that the government is proactively addressing the issue. Social media and internet intermediaries play a pivotal role in managing the dissemination of information and fostering a safe online environment. Their cooperation and compliance with the government's advisory will be crucial in mitigating the risks associated with deepfake content.


