AI models mirror our Good and Evil sides

AI models mirror our Good and Evil sides

As AI is being involved in many aspects of our life from digital assistants to education, art, healthcare, economy and courts. Sometimes, AI do the job perfectly. Sometimes, it makes mistakes, introduces bias and has evil behavior.

Does this sound familiar to you? Yes, humans also perform similarily in their positions.

We always hear news about mistakes of careless or evil employees that led to large material and human losses.

It is normal for AI to have the same behavior because AI models always learn from surrounding environment and data provided by humans which leveraging humans’ strengths, weaknesses and biases.

AI model is your baby

Photo by

When babies come to life, they do not have knowledge about anything and do not know how to perform any task even simple ones like eating, drinking, wearing clothes or tying their shoes.

But also, this baby has a brain which is able to store data about different topics and be trained to perform tasks and getting the performance improved over time.

It is parents’ responsibility to provide their children the correct data and instructions to train them effectively.

You can use the same analogy for AI models and specially deep neural networks which are widely used today. Those models simulate the structure of human brain consisting of layers billions of neurons with very complex connections between them.

First, the model is trained on a proper dataset depending on which task it is going to perform.

During training process, connections between neurons are modified and parameters’ values become more accurate iteration after the other until reaching the desired accuracy which cannot be 100% to avoid overfitting the training data.

Then, the model is tested like a black box on data it has not seen before. after proving good performance, it can be used publicly.

The learning process do not stop here. Most AI models now continue to learn and tune their parameters based on feedback provided by users.

So, the human factor is involved and affects how the model works starting from training and while it is being used.

We are the source of bias in AI models

As AI models depend on the data provided by humans for training at the first place, they are unfortunately affected by human bias which appears clearly in models’ behavior.

This bias is really harmful especially when the model is used in sensitive operations.

  • Face recognition models that self-driving cars use to identify pedestrians failed to detect black women. This will put people’s lives in danger if we decided to depend only on such models.
  • AI models used to filter bank loans applications resulted in unfair outcomes, because they amplified the bias in the data they were trained on. This was basically historical loan data previously processed by humans.
  • The bias in AI models also appeared clearly in legal systems used in courts to support judges’ decision making. Applying COMPAS algorithm for risk assesment in united states, black defendants were labeled as higher risk compared to white defendants. This also refelcts the bias in training dataset which contained more black criminals.

Humans deceive AI into taking harmful actions

Photo by

Large Language models used in apps like ChatGPT by themselves do not reveal dangerous information that can harm people or lie to their users.

Unfortunately, those models are not mature enough to prevent evil users from deceiving them by clever and tricky prompt scenarios that push the app to take undesired actions like:

  • Providing instructions for making bombs or nuclear weapons.
  • Generating commercial products based on copyrighted artwork.
  • deceiving the manager into buying stocks of poor performing company by prompting the model indirectly to hide emails containing those information.

AI safety and transparency

Photo by

We ( humans ) have good and evil sides, and here comes the role of laws to keep our society a safe place to live in by defining what are people allowed and not allowed to do.

AI behaves like humans and can sometimes make mistakes and harm people, so we need laws to govern how AI models work, what they are allowed to do and which data they can be trained on.

In 2023, the European parliament has reached an agreement on the world’s first law in artificial intelligence.

The law handles various issues related to AI:

  • Classifying people based on their personal charactaristics or economic status.
  • Biometric identification systems like facial recognition.
  • Generative AI and training on copyrighted work.

For those laws to be more effective, AI companies should have more transparency about models used in their apps, what data were used to train the models and what values they align to.

Knowing these aspects will make it clear if using the model can cause any harm.

Conclusion

Photo by

AI is not something magical. With all its amazing capabilities, it reamains a tool in our hands and it’s our resposibility to control it and minimize its side effects.


Thank you for taking the time to read this article. I really appreciate this. Please, leave your thoughts and suggestions in the comment. I ‘ll wait for them :)


Your observation highlights the duality of AI, mirroring human strengths and weaknesses, which underscores the importance of thoughtful implementation and oversight. ???? Generative AI, specifically, can elevate the quality of work across various sectors by enhancing creativity and efficiency, while also learning to mitigate biases when properly trained. Let's explore how generative AI can revolutionize your workflow and address these challenges—book a call to dive into the potential of AI tailored to your needs. ?? Cindy

要查看或添加评论,请登录

社区洞察

其他会员也浏览了