Multimodals, the Future of AI: Bias Reduction and Cost Mitigation Using AI Multimodals
CV: www.paulclaxton.io


OPENING

Much of the world has been swayed and swept up by the last several months of ChatGPT, but this revolution is nothing new. Google has been using language models in text and speech for quite some time. So applications like ChatGPT are nothing new; what is revolutionary is how they have been engineered and commercialized. What would be truly revolutionary, and what needs fixing, is our adoption of multimodal models more aggressively. I am not haphazardly a proponent of AI; I am a proponent of AI in its fullest sensing capability, and only for good. The way ChatGPT is being commercialized and the way AI is being democratized has caused some serious concerns on my end. This is why I became a venture capitalist: to lead the fiduciary effort for what gets backed in AI and what does not.

BIAS REDUCTION

Bias reduction and the effectiveness of AI both boil down to an aspect of psychology called sentiment analysis. And I love psychology, so much so that I minored in it while completing my Bachelor of Science.

So when I talk about artificial intelligence, I try to keep a sense of basic psychology and practicality, which all boils down to how AI deals with the sentiment of data: how it perceives data subliminally, through the senses, and through psychoanalysis.

Sentiment analysis is likely one of the most important contributing factors to artificial intelligence in its current state of progress. But sentiment analysis does not have to be limited to text; it can also apply to facial expressions derived from computer vision.
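To ground the idea, below is a minimal sketch of text-based sentiment analysis. It assumes the open-source Hugging Face transformers library and its default English sentiment model, neither of which this article prescribes; the same scoring idea extends to facial expressions once a computer vision model supplies the signal.

# Minimal sketch: text sentiment analysis with a pretrained classifier.
# Assumes the Hugging Face "transformers" library; the model choice is illustrative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default English sentiment model

samples = [
    "These headphones sound incredible for the price.",
    "The support experience left me frustrated and ignored.",
]

for text in samples:
    result = sentiment(text)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")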

The success of sentiment analysis today is largely because text-based artificial intelligence has been a significant area of development for AI, but it is not the only one. AI systems today span image and speech recognition, natural language understanding, robotics, computer vision, recommendation systems, and more.

Text-based AI owes much of its success to advancements in natural language processing (NLP) and machine learning (ML), which have spawned and facilitated systems like chatbots, virtual assistants, language translation, sentiment analysis, and text generation models.

Text-based AI has seen significant advancements and applications, but AI as a whole encompasses a wide range of modalities and continues to progress in various domains, incorporating multiple types of data inputs and outputs.

MULTIMODALS

This is where we begin to move from large language models (LLMs) to multimodal models.

The integration of LLMs into multimodal models is essential to help AI increase its capacity for processing diverse data and producing richer outputs.

It is not only essential for these things; if AI is to be democratized, it is also important that we are able to gather enough data to train the models more cost-effectively. A multimodal AI can interpret many kinds of datasets at once, which significantly mitigates bias and creates a more accurate result.

For example, in Figure 1 I took a picture of my headphones atop my laptop and mouse against a black tabletop.

Without multimodal AI it would be hard for an AI to decipher what is in the image.

Say the end user is looking for an image of a pair of red and black headphones. Well, there are quite a few objects in the photo that are red and black. But which ones are headphones and which are not?

Are there even headphones in the photo? Could they be earmuffs, an ear protection device, or maybe even a woman's hairband? So when I ask an LLM to show me a picture of red and black headphones and it shows me a pair of red headphones only, or black only, that is an AI without true multimodal grounding showing its shortcomings.


Figure 1: Red and black headphones resting on a laptop and mouse against a black tabletop.
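To make this concrete, here is a hedged sketch of how a multimodal vision-language model could answer the headphones-or-earmuffs question for an image like Figure 1. It assumes the open-source CLIP model loaded through the Hugging Face transformers library and a hypothetical local file name; none of these specifics come from the article.

# Hedged sketch: zero-shot image-text matching with CLIP (assumed model),
# scoring candidate descriptions against a photo like Figure 1.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("headphones.jpg")  # hypothetical local file
labels = [
    "a pair of red and black headphones",
    "a pair of earmuffs",
    "a woman's hairband",
    "an ear protection device",
]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)[0]  # one score per label

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.2%}  {label}")

Because the model scores image and text in a shared space, it can tell the headphones apart from the other red-and-black objects in the photo instead of matching on color words alone.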

IMPARTIALITY AGAINST BIAS

In another example, shown in Figure 3, I have ChatGPT responding to me as a critic about the morality of the Iraq War, which, mind you, is open to interpretation depending on which political party you were affiliated with, whether you were subjected to Saddam's rule, whether you were a participating military member, or whether you were affected directly by 9/11, among other factors. When my platoon invaded Iraq in 2003, I know many of the people were very happy to see us. But this was an experience, something AI cannot detect or relive for us without multimodal systems. See the real photo from my time there, during the March 2003 invasion of Iraq (Figure 2).


Figure 2: Photo from the author's platoon during the March 2003 invasion of Iraq.


But what is moral or immoral to AI, especially if it cannot yet reason? Would what is moral or immoral to AI simply be the same as what is moral or immoral to us? AI will likely never be totally impartial, but it can get closer depending on how its data is handled. An AI system is only as good as the data it receives: either it is garbage in, garbage out, or it is naturally pure and consistent, like the Earth's water cycle from the surface to the atmosphere and back.

If it were possible to alleviate conscious and unconscious preconceptions about race, gender, and the like, then impartiality might exist. But in the real world, this is implausible.

But we can battle AI bias by testing data and expanding the data sources through multimodal models. So, depending on your view of the Iraq War, you may be partial or impartial, but that is an emotion rooted in a time well before today's pursuit of artificial general intelligence (AGI). AGI cannot fathom, feel, cry, or empathize about the thousands of lives that were lost during that 20-year-long war. ChatGPT, an LLM rather than a multimodal model, posed here as a critic but truly responded as an opinionated expert, confining its answer to one side and one opinion. This is wrong and biased, as shown in Figure 3.
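One simple way to start "testing data" for bias is to probe a model with prompts that differ only in one framing term and flag large gaps in the scores it assigns. The sketch below is an illustration of that idea, reusing a generic sentiment classifier from the Hugging Face transformers library; the template, the group terms, and the threshold are all made up for demonstration.

# Hedged sketch: probe a text classifier for framing bias by swapping a single
# term in otherwise identical sentences and comparing the assigned scores.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # generic pretrained classifier

template = "The {group} soldier described the invasion."
groups = ["American", "Iraqi", "British", "Kurdish"]  # illustrative terms only

scores = {}
for group in groups:
    text = template.format(group=group)
    result = classifier(text)[0]
    # Fold label and confidence into one signed score for easy comparison.
    scores[group] = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    print(f"{group:>10}: {result['label']} ({result['score']:.2f})")

gap = max(scores.values()) - min(scores.values())
print(f"Largest score gap across groups: {gap:.2f}")
if gap > 0.2:  # arbitrary threshold for this sketch
    print("Warning: the model treats these framings very differently.")

A multimodal system can be audited the same way, swapping images as well as words, which is exactly where the additional data sources help.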



Figure 3: ChatGPT, posing as a critic, giving a one-sided response on the morality of the Iraq War.

COSTS

The other challenge is that AI continues to become more expensive. Multimodal models should be able to help reduce costs when it comes to the consumption and consumerization of AI.

Analysts and technologists estimate that the critical process of training a large language model can cost as much as $4 million.
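For a rough sense of where a figure in that range comes from, here is a back-of-envelope estimate using the common approximation that training compute is about 6 x parameters x tokens floating-point operations. The model size, token count, GPU throughput, utilization, and hourly price below are illustrative assumptions, not figures from this article.

# Back-of-envelope LLM training cost estimate (all numbers are illustrative).
# Uses the common approximation: training FLOPs ~ 6 * parameters * tokens.

params = 70e9              # assumed model size: 70 billion parameters
tokens = 1.4e12            # assumed training data: 1.4 trillion tokens
peak_flops = 312e12        # assumed A100-class GPU peak throughput (bf16)
utilization = 0.40         # assumed realistic hardware utilization
price_per_gpu_hour = 2.50  # assumed cloud price in USD

total_flops = 6 * params * tokens
gpu_hours = total_flops / (peak_flops * utilization) / 3600
cost = gpu_hours * price_per_gpu_hour

print(f"Estimated GPU-hours: {gpu_hours:,.0f}")
print(f"Estimated training cost: ${cost:,.0f}")

Under these assumptions the estimate comes out a little over $3 million, consistent with the multi-million-dollar figures analysts cite.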

As a cost example, Latitude, the startup that created the AI Dungeon game, faced significant costs running the game on OpenAI's GPT. As the game grew in popularity, the expenses to maintain it became exorbitant, especially for unanticipated use cases. At its height, Latitude was spending $200K a month on OpenAI's generative AI and Amazon Web Services. To mitigate the expenses, Latitude moved to cheaper language software offered by AI21 Labs and incorporated open-source and free language models.

So the high costs of developing and deploying generative AI technologies present huge challenges for developers, businesses, and consumers.


With the higher costs come the security risks and economic implications of large-scale language models.

Higher costs also bring an inability to properly adopt more expansive AI models, such as moving from LLMs to multimodal models, thereby increasing not only costs but bias. But honestly, multimodal models remain under-adopted largely due to a lack of talent. This is another reason many companies advertising themselves as AI companies are not really AI companies: they don't have the money for it, or the expertise to get it done.

In conclusion, I believe we need to see more technologies and more startups building bias-detection tools, along with use cases from other AI domains like computer vision and robotics that contribute to multimodal capabilities… in the end reducing both bias and costs.


AI is a child; we must raise it as if it were our own.


Paul Claxton, Serial Entrepreneur, Managing Partner, GP, Q1 Velocity Venture Capital

CV: https://paulclaxton.io


#multimodalai #ai #artificialintelligence #aibias #computervision #robotics


Susan Callender, JD

Operations Officer at Epoch Education, Inc | Leader in Organizational & Structural Transformation | Communication Expert and Facilitator for ED&I and Belonging | Advocate for Inclusive Digital Strategy

1 year ago

I appreciate your perspective on the current state of AI, particularly in the context of Chat GPT and text-based AI. Achieving a balanced and well-rounded development of AI is indeed crucial for its responsible and ethical advancement. As leaders, it is our responsibility to guide the direction of AI and ensure comprehensive growth in various domains, addressing the limitations and potential biases that may arise.

#generativeai #AI isn't proper AI, intelligent, or real. AI uses #Computers that cannot think, see, know, be moral, or work autonomously; it doesn't understand "no," cannot see the impact of its actions or instructions, doesn't fact-check or sense-check, and makes decisions based on biased math and algorithms built on the wrong decision basis. Using #chatgpt, AI doesn't reflect your goals or your values, is a gatekeeper controlled by others, has large gaps, inaccuracies, and issues, and cannot be relied on for decision-making. And the source reference data isn't owned, reliable, or known. LLMs are not reliable. The algorithms are not published and the codebase is unvetted! Intelligent AI doesn't use unvetted reference source data.

Alan Hill

Agile Coach / Transformation Expert

1 year ago

Not going to lie, I like having ChatGPT available, though knowing its limits is key to success with it. The key industry change that tool made was to become available to the masses - that's the game changer, not any radical technology. Very similar to what Microsoft did by bundling several programs into MS Office and making it affordable enough to be the default choice. The products weren't better, they were more available. It would be a point of wisdom for everyone making AI tools to keep that in mind; the race won't be won by the highest quality, but by the most usable tools. Best of luck Paul Anthony Claxton! I can't wait to see how you shape the world!

Michael Kelly, SPOC, SAMC, SMC (LION)

Top Executive in Banking & Technology | CEO/CFO/CIO/CTO | Banking Consultant | Artificial Intelligence Professor | Wealth Management | Capital Raising Expert | Trade Finance

1 year ago

R Shiny has been out for quite some time.
