ARE WE MOVING TOWARDS SAFER AI SYSTEMS? (Part 1)
We are already in the AI era where generative AI, enterprise AI and other forms of applied AI are transforming the world around us.
From business, finance, entertainment, and education to marketing, art, and content creation, AI is reshaping how we work.
· In business, enterprise AI is making business models more productive and efficient with automated workflows and smart systems.
· Companies are leveraging the power of LLMs to introduce AI-based products and services to the market.
· Today, we have AI chatbots like Perplexity, Claude 2, ChatGPT, and more. These LLMs are very powerful, with advanced capabilities to understand and respond to human conversation and carry out natural language tasks: writing essays, blogs, and poems; drafting content and marketing strategies; translating; and much more.
· Then we have AI-based video editing, image editing, and image generation tools like Midjourney.
· The market is also witnessing a surge in audio generation tools. Meta recently launched AudioCraft, a tool that lets you generate high-quality audio and music from text.
· Last month we also saw the release of air.ai, an AI sales and customer support tool. It can hold 10- to 40-minute calls that sound like real humans, acting as your sales and customer agent without any training and working across 5,000 applications.
· One LLM-based app making news in India is the Jugalbandi Chatbot. Powered by ChatGPT, it is popular in rural areas as a universal translator: it takes a query in the user's regional or local language, retrieves answers from English sources, and responds in the user's local language.
The market is loaded with AI apps, platforms and tools that are changing our professional and personal lives.
On one hand, we are witnessing the revolutionary technological advancement of the century; on the other, these developments come with greater risks.
The Risks
Concerns about biased training data producing prejudiced outcomes, deepfakes, misinformation, privacy issues, cyber threats, and ethical ambiguity are all worth considering.
The rise of AI-powered tools has ushered in a new era of concern: they let malicious individuals craft synthetic personas that are virtually indistinguishable from real humans online, across speech, text, and video.
These bad actors exploit the technology for a range of deceptive practices, either by camouflaging their true identities or by impersonating others.
The consequences are far-reaching: spreading misinformation and disinformation, orchestrating security breaches, executing fraud, propagating hate speech, and facilitating online shaming.
For instance, a fabricated image depicting the Pentagon in flames sent shockwaves through equity markets in the U.S., highlighting the potential economic impacts of AI-generated falsehoods.
On social media platforms like Twitter and Instagram, fake users with manufactured personas disseminate extreme political viewpoints, their posts garnering millions of shares and deepening the fractures in online political discourse.
Even cybersecurity is not immune. Cloned AI voices can bypass voice-based security measures, enabling unauthorized access to sensitive information, as in a case where a bank customer's voice authentication was compromised.
The ramifications extend to democratic processes as well, as evidenced by a tragic suicide allegedly linked to conversations with an AI-powered language model and by AI-generated deepfakes casting shadows over recent elections in Turkey.
With upcoming elections in the U.S., India, the EU, the U.K., and Indonesia, the menace of bad actors exploiting Generative AI for misinformation and election manipulation looms larger than ever, posing a growing threat to the integrity of democratic systems worldwide.
The Coming of AI Regulations
Regulatory agencies across the world are worried about the misuse of AI models, so governments are drafting AI regulations to address these concerns without hindering development. These regulations aim to make AI systems safer for humanity, encouraging a regulated development environment where innovation thrives without harmful consequences.
China was one of the first countries to publish detailed AI regulations, covering algorithm recommendations, training data disclosure, deepfake prevention, and the management of generative AI.
Next in the race are the European Union and the United Kingdom. Though the EU has been focusing on AI regulation and collaborative development since 2018, it is constantly adjusting its AI strategy to keep pace with the rapidly advancing AI landscape.
The UK released its whitepaper "A Pro-Innovation Approach to AI Regulation" in March 2023. As per the whitepaper, UK AI regulation prioritizes safety, security, transparency, accountability, and fairness.
At first glance, each of these AI regulations seems beneficial for the tech world and society. But when we delve deeper, we find that they are biased toward the interests of the current governments of the respective countries.
For example, China's AI regulations are as much about information control as about safety, leaning toward the government's interests. Through the algorithm registry, the government requires developers to disclose their training data and how their models are trained. That may look like a step toward eradicating biased training data and the prejudiced, misleading outputs it produces, but at a deeper level it is about ensuring that training data does not contradict the current government and its decisions.
In the coming weeks, my articles will decode the AI regulations put in place by China, the UK and the EU.
Stay tuned for my next articles in the series "Are We Moving Towards Safer AI Systems?" for a better understanding of what these AI rules propagate and whether we really need them.