Generative AI Is Great, but Only if We Collectively Share Responsibility for It
Ricky Chopra
Advocate & Litigator | Chairman of RCIC | Building a society that connects innovative legal philosophies to enterprise
“Generative AI” is a phrase that has taken our social media feeds, media channels and the rest of the internet by storm. Since LinkedIn launched its AI features last October, conversation about “LinkedIn + AI” has multiplied severalfold, becoming almost synonymous with LinkedIn itself.
To put that into perspective: LinkedIn launched in 2003, and it has taken almost two decades to reach its current search volume. Yet despite the high enthusiasm, AI is something we don’t quite comprehend yet.
Moral dilemmas have plagued us before, but none this big.
Last year a resonating headline hit us hard: “Elon Musk sues OpenAI”. At the end of last year, yet another followed: “The New York Times sues OpenAI”.
For those who aren’t aware of what’s going on, in a nutshell: Elon Musk and The New York Times are suing OpenAI on the grounds that it used proprietary information and intellectual property to train AI models now used by the masses. Musk’s lawsuit adds that OpenAI launched as a not-for-profit and has turned itself into what he characterizes as a Microsoft-controlled for-profit.
But I mention this because it has brought an old question back into the limelight in a new guise: how do we moderate user-generated content created through AI, and who is liable for it?
In several hearings held by the US Senate, Facebook and its peers have undergone immense scrutiny over moderation in the age of social media. But putting the onus on the company that builds the platform has also produced a shadow monopoly over the moderation of user-generated content. The same is entirely plausible with emerging generative AI technologies.
Standing here today, we might feel we have seen it all, but I am certain we haven’t. Here’s why.
“Generative AI” is not just a quick tool that helps you search faster or drafts your weekly social media posts and reels. It is much more like an intern: a child that is learning. Now imagine a two-year-old child who, given enough computing power, can comprehend Socrates, write like Kafka and perform mathematical calculations faster than Einstein and Newton combined.
Now consider that we are all using AI in one way or another, constantly feeding this child a steady stream of user-generated content of all bytes, formats and types. The possibilities are endless, but in all directions, positive and negative.
Here are two examples of why that is concerning. Read carefully.
Example 1: Micro Scale
You’re a serial entrepreneur who has discovered and perfected a formula for entrepreneurship over the past 20 years. Your formula is your intellectual property (IP).
Since social media is taking a turn toward freebies and you want to grow your social presence, you enter your formula into ChatGPT and ask it to condense it down to its most essential parts and make it catchy enough for people to contact you.
Tomorrow, one of your prospective clients uses ChatGPT and asks for “the perfect business formula”.
Your formula is what they get. Your 20 years of experience, dispersed in two minutes.
Example 2: Macro Scale
We live in a mix of competitive and oligopolistic markets, where a few key players control a majority of resources, though there is considerable pushback from the small businesses that supply and support these large conglomerates.
Hypothetically, let’s consider a “Company A” in the technology market with only two competitors.
Company A invests in AI and AI-based systems by acquiring a firm that specializes in AI models. The first-mover advantage this gives Company A over its competitors is unfathomable.
The reasons are:
● AI is faster than humans, so even a one-month lead in AI systems means a month’s worth of customer data processed, both historical and in real time.
● Round-the-clock access to user-generated content for a month, which translates into even more targeted methods of new customer acquisition and, possibly, customers poached from competitors.
● Faster data analysis, leading to quicker new product development and a sharper competitive edge.
In short, a one-month head start with AI systems hands Company A economies of scale in the blink of an eye.
The current litmus test is just a preview of what’s to come.
The lawsuit could test the emerging legal contours of generative AI technologies (so called for the text, images and other content they can create after learning from large data sets) and could carry major implications for the news industry. The Times is among a small number of outlets that have built successful business models from online journalism, but dozens of newspapers and magazines have been hobbled by readers’ migration to the internet.
Accessibility to resources and the onus of responsibility have never needed redefining more urgently than now. But regulations cannot be shortsighted.
Copyright issues are merely the tip of the iceberg for AI. The larger picture spells danger: the possible collapse, under the threat of a monopoly on AI systems, of our current infrastructure of accessibility that has taken years to build.
That being said, the future is grim only if we choose to make it so. Collaborative effort between businesses, legal experts, policymakers, users and all other stakeholders is imperative if fair and equal accessibility is to prevail.