Opinion | Public Policy Can Force AI Platforms to Nix Deepfakes Before Their Creation
If an AI algorithm scrapes data from a platform, the platform should have a regulator-mandated fiduciary duty to restore that data to the user. Platforms cannot share user data with AI algorithms, internal or external, that misuse it for purposes other than those for which it was given
K Yatish Rajawat and Dev Chandrasekhar
Cyber fraud has evolved dramatically in recent years, shifting from simple email deceptions to sophisticated GenAI technologies that create deepfakes able to recreate the face, body movements, voice, and accent of a real person. Lawsuits are being filed worldwide to block or take down these deepfakes under copyright infringement laws. But using the courts to prevent this misuse is not enough; the state needs to enforce policy at the AI level.
Recently, a multinational company’s CFO was impersonated using deepfake technology on a video call. This AI-generated facade was used to authorise the transfer of a significant sum, totalling nearly $25 million, into multiple local bank accounts. Initially suspicious, the employee was convinced after a video chat with what appeared to be her CFO and several co-workers, showcasing the dangerous persuasive power of deepfakes in committing financial crimes.
Until recently, synthetic innovations, from synthesised music to edited images and voice assistants, still bore the human touch, a critical aspect when ownership of and responsibility for the synthetic output have to be assigned. Moreover, every element of enhancement was tedious and expensive and required a human interface.
However, generative AI (GenAI) dramatically shifts this landscape, bypassing the human touch and creating works almost entirely by software: the deepfake. These convincing falsifications are propelled by advanced AI tools, many freely accessible on the internet, that blend machine learning and neural networks.
Deepfakes are generally fought on grounds of copyright and intellectual property infringement, or as a financial crime when they involve a financial transaction and digital impersonation.
Since such responses are generally post facto, the reputational, financial, or societal harms caused by these deepfakes have already been done; they neither prevent nor reduce the number of deepfakes. Nor is there any deterrence for the AI platforms used as tools to create them. Most infringement actions are limited to taking deepfakes down from social media.
Detecting the criminal is difficult, as most AI tools do not authenticate users. In a way, it is like gun control. These tools are available over the internet, and most are free because the platforms want users’ data and are not charging for access yet. It is as if anybody has access to a gun without having to buy either the gun or the ammunition. The only way to control deepfakes is to monitor access to the AI engine and access to the data, just as gun control monitors access to guns and to ammunition.
Guns and ammunition: AI platforms and data
One way to deter deepfakes is to make the platform a co-accused in every case of a proven deepfake. Second, public policy must make it compulsory for AI platforms to authenticate their users and keep a record of them. Their output must carry watermarks, and users’ digital and real identities must be available to law enforcement. Social media platforms accessed in India are already required to do this, and to appoint a nodal compliance officer; a similar obligation should be placed on all AI platforms.
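
To make the watermarking requirement concrete, here is a minimal sketch of how a platform could stamp an authenticated user’s identity into a generated image. The least-significant-bit scheme and the names embed_watermark and user_id are illustrative assumptions, not any platform’s actual method; a production watermark would need to be far more tamper-resistant.

    # Minimal sketch (an assumption, not a real platform's scheme): embed
    # an authenticated user ID into the least significant bits of a
    # generated image, so a proven deepfake can be traced back to the
    # account that created it.
    import numpy as np

    def embed_watermark(image: np.ndarray, user_id: str) -> np.ndarray:
        """Hide user_id's bits in the least significant bit of each byte."""
        payload = user_id.encode("utf-8")
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = image.flatten().copy()
        if bits.size > flat.size:
            raise ValueError("image too small to carry the payload")
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
        return flat.reshape(image.shape)

    def extract_watermark(image: np.ndarray, n_chars: int) -> str:
        """Read back n_chars worth of least significant bits and decode."""
        bits = image.flatten()[: n_chars * 8] & 1
        return np.packbits(bits).tobytes().decode("utf-8")

    # Usage: watermark a stand-in "generated" image, then recover the ID.
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(img, "user-42")
    assert extract_watermark(marked, len("user-42")) == "user-42"

A real deployment would pair such a mark with the user-identity records the law would oblige the platform to keep, so that a recovered ID actually leads to a person.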
It is even more important to control the data (the ammunition) through ‘Right to Data’ laws that ensure individuals own their data. At the most basic level, a video or an image is data; an AI engine has to scrape several thousand images to create a near-exact replica for a deepfake. If ownership of that data is clear, scraping it off the public internet will be discouraged; this may control deepfakes at the creation stage itself.
Under India’s Data Protection Act, however, the right to data ownership is not clearly defined. Currently, platforms own their users’ data, and AI engines are free to scrape and use that data to build their models.
Until ownership of data is vested in the individuals who created it, legal rights of privacy or data protection cannot be fully applied. Ownership, as distinct from privacy and protection, has to be defined first, not only for data linked directly to identity, as in the case of privacy, but also for the ‘indirect’ digital footprints across the Net: the images, video, and text created by any action of a user.
Privacy laws define only the data that needs to be kept private; they do not cover all data. For AI platforms, all personal data, and even non-personal data, matters: non-personal data can be combined with personal data to digitally impersonate users. An impersonation engine needs only enough data to create videos like the one used to impersonate the company’s CFO on a video call.
Users, not the platform, should define data ownership
The next step is to define consent for data use. The law on data fiduciaries and account aggregators clearly defines consent for the use of financial data; why is such a definition of consent not applied to video data created and posted on social media? The default assumption should not be that the platform has consent, and the user should not be forced to grant it as a sign-up default buried in the massive list of “terms and conditions” that nobody reads. Consent has to be explicit, and the default has to be that without it no data can be used by the platform; if the data is stolen or scraped by an AI, the fiduciary responsibility for maintaining its sanctity should rest with the platform.
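
As a rough illustration of what an explicit-consent default could look like in platform code, the sketch below denies every data-sharing request unless the user has affirmatively opted in for that specific purpose. The ConsentRegistry class and the purpose labels are hypothetical, introduced only for this example, and are not modelled on any existing platform’s or regulator’s API.

    # Hypothetical sketch of a deny-by-default, purpose-specific consent
    # gate, as the article argues regulators should require.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRegistry:
        # user_id -> the set of purposes the user has explicitly approved
        grants: dict = field(default_factory=dict)

        def grant(self, user_id: str, purpose: str) -> None:
            """Record an explicit, purpose-specific opt-in."""
            self.grants.setdefault(user_id, set()).add(purpose)

        def may_share(self, user_id: str, purpose: str) -> bool:
            """Deny by default: share only against a recorded, matching grant."""
            return purpose in self.grants.get(user_id, set())

    registry = ConsentRegistry()
    registry.grant("alice", "account_aggregation")

    # Consent for one purpose does not carry over to another.
    assert registry.may_share("alice", "account_aggregation") is True
    assert registry.may_share("alice", "ai_training") is False

The point of the design is that silence never counts as consent: an AI training pipeline asking for data it was never granted gets nothing.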
If an AI algorithm scrapes data from a platform, the platform should have a regulator-mandated fiduciary duty to restore that data to the user. Platforms cannot share user data with AI algorithms, internal or external, that misuse it for purposes other than those for which it was given.
Data, whether obtained by hook or by crook, is the ammunition for GenAI’s gun. If legal rights over data are recognised and given to individuals, their agency over the data will be established ex-ante, and the misuse of deepfakes will be countered at the creation stage itself.
K Yatish Rajawat and Dev Chandrasekhar work at the Centre for Innovation in Public Policy, a Gurgaon-based think-and-do tank. The views expressed in the above piece are personal and solely those of the authors. They do not necessarily reflect Firstpost’s views.