AI-Generated Texts and the Legal Landscape: A Technical Perspective
Abhivardhan
Technology Law & AI Governance Specialist | Founder, Indic Pacific and Chairperson, Indian Society of Artificial Intelligence and Law
Greetings. In this issue of the Visual Legal Analytica newsletter by Indic Pacific Legal Research LLP, I am glad to feature a technical perspective on the legality of AI-generated texts.
A complete insight into the technical issue is also available on the Visual Legal Analytica blog at vla.digital.
Here's a reality check on the legality of AI-generated texts.
There is nothing unreasonable about a technical institution discounting text outputs from ChatGPT or Bard.
The New York Times' case against OpenAI also makes a fair amount of sense.
However, here is a better way to address the question of legality:
For example, if the Indian Foreign Service one day decides to use text-generating AI in diplomatic and consular communication, it simply has to define the specific semantic and grammar-based protocols it will accept for prompts and text responses, along the lines sketched below. That would go a long way towards starting with a clean slate, and it can be privacy-friendly too.
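To make that concrete, here is a minimal Python sketch of what such a protocol gate could look like before a human signs off on a draft. Everything in it is a hypothetical illustration, not an existing Foreign Service standard: the salutation list, the prohibited phrases and the function names are all assumptions made for the example.

```python
# Hypothetical "semantic and grammar protocol" gate for AI-drafted
# communications. All rules and names below are illustrative only.

import re

# Salutations the (hypothetical) service has approved for official drafts.
APPROVED_SALUTATIONS = {"Excellency", "Dear Ambassador", "Dear Consul General"}

# Phrases an AI draft must never contain verbatim (illustrative).
PROHIBITED_PHRASES = [
    re.compile(r"\bas an AI language model\b", re.IGNORECASE),
    re.compile(r"\bI cannot verify\b", re.IGNORECASE),
]

def passes_protocol(draft: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons) for a draft checked against the house protocol."""
    reasons = []
    first_line = draft.strip().splitlines()[0] if draft.strip() else ""
    if not any(first_line.startswith(s) for s in APPROVED_SALUTATIONS):
        reasons.append("salutation not on the approved list")
    for pattern in PROHIBITED_PHRASES:
        if pattern.search(draft):
            reasons.append(f"prohibited phrase matched: {pattern.pattern}")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, reasons = passes_protocol("Dear Ambassador,\nThank you for your note.")
    print(ok, reasons)  # True []
```

The point of the sketch is the design, not the rules themselves: the institution, not the AI vendor, owns the list of what acceptable text looks like, and a human reviews anything the gate flags.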
Let's take a quick example: someone running a Substack or a Medium account, as compared to a legacy media or publishing platform, is far more exposed when their content is scraped by AI systems such as GPT-4 and other LLMs.
The best way forward is for content protection practices to be decided by consensus, prioritising non-legacy media, publishers and content creators. Yes, content restrictions must be implemented so that verbatim content cannot be reproduced through ChatGPT. But if OpenAI cannot be trusted, the most practical options for a publisher are to enforce open standards on data scraping techniques and to adopt human-in-the-loop grammatical protocols. Together, these two measures are the best available route to content provenance.
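One widely used opt-out mechanism already exists: OpenAI's crawler identifies itself with the GPTBot user agent, and a site can disallow it in robots.txt. The short Python sketch below simply renders such a file; the crawler list is illustrative (CCBot is Common Crawl's crawler), and this is a sketch of the mechanism, not a complete protection scheme.

```python
# Minimal sketch: generate a robots.txt that opts a site out of known
# AI-training crawlers. "GPTBot" is OpenAI's documented crawler user
# agent; the rest of the list is an illustrative assumption.

AI_CRAWLERS = ["GPTBot", "CCBot"]  # CCBot: Common Crawl's crawler

def build_robots_txt(crawlers: list[str]) -> str:
    """Render robots.txt rules disallowing the given user agents site-wide."""
    blocks = [f"User-agent: {agent}\nDisallow: /" for agent in crawlers]
    return "\n\n".join(blocks) + "\n"

if __name__ == "__main__":
    with open("robots.txt", "w", encoding="utf-8") as fh:
        fh.write(build_robots_txt(AI_CRAWLERS))
```

Bear in mind that robots.txt is advisory, not enforceable: a publisher who does not trust a scraper to honour it would still need server-side user-agent filtering, plus the provenance practices discussed above.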
Here are some other insights that may interest you.
The French, Italian and German Compromise on Foundation Models of GenAI
Almost every major economy is trying to put guardrails on AI technology, which is developing rapidly and being used by ever more people. Like others, the European Union (EU) wants to lead the world both in developing AI and in regulating it efficiently and effectively. In mid-2023, the European Parliament passed a draft of one of the first major laws to regulate AI, the EU AI Act, which would impose restrictions on the technology's riskiest uses and serve as a model for policymakers elsewhere.
Unlike the United States, which took up this challenge only recently, the EU has been working on such a framework for more than two years, and it did so with greater urgency after the release of ChatGPT in 2022.
On 18 November 2023, Germany, France and Italy reached an important pact on AI regulation and released a joint non-paper that countered some of the basic approaches of the EU AI Act, suggesting alternatives they claim would be more feasible and efficient. The non-paper underlines that the AI Act must aim to regulate the application of AI rather than the technology itself, because the innate risks lie in the former, not the latter.
Read the complete insight at https://www.indicpacific.com/post/the-french-italian-and-german-compromise-on-foundation-models-of-genai
OpenAI's Qualia-Type 'AGI' and Cybersecurity Dilemmas
OpenAI CEO Sam Altman was fired by its Board of Directors for a short spell in November 2023, and Greg Brockman, another member of the Board, was removed alongside him. Neither OpenAI's spokespersons nor the two men themselves gave any reasons when approached. However, it came to light that, before the firing, several OpenAI researchers and staff had written a letter to the Board warning of a powerful artificial intelligence discovery that they said could threaten humanity.
OpenAI was initially created as a non-profit organisation whose mission was “to ensure that artificial general intelligence benefits all of humanity.”[1] In 2019, it opened a for-profit arm. This was a cause for concern: it was anticipated that the for-profit wing would dilute OpenAI's original mission of developing AI for the benefit of humanity and would instead chase profit, which can lead the technology to grow without ethical restraint. Sam Altman and Greg Brockman favoured strengthening this wing, while the other four Board members were against ceding so much power to it and wanted to stick to developing AI for human benefit rather than for business goals.
Read the complete insight at https://www.indicpacific.com/post/openai-s-qualia-type-agi-and-cybersecurity-dilemmas
The Policy Purpose of a Multipolar Agenda for India, First Edition, 2023
This is Infographic Report IPLR-IG-001, in which we address the concept and phenomenon of multipolarity in the context of India's geopolitical and policy realities.
You can also purchase the report at the VLiGTA App.
Here is a sneak peek of the report.
Read more such insights at vla.digital and indian.substack.com.