Is AI a Boon or a Curse?


Elon Musk has hit out at artificial intelligence (AI), saying it is not "necessary for anything we're doing".

Elon Musk Warns AI Could ‘Disarm Humanity’ to Achieve World Peace.

Elon Musk: 'Mark my words --- A.I. is far more dangerous than nukes'.



Sam Altman, the CEO of OpenAI (the company behind ChatGPT), has also been explaining the threats and dangers of AI to world leaders. This is what he has to say about AI: "Sam Altman and other technologists warn that A.I. poses a ‘risk of extinction’ on par with pandemics and nuclear warfare".


Now, if I talk about ChatGPT, even a tech giant like Google is afraid of it.

" Google isn't just afraid of competition from ChatGpt -- the giant is scared of ChatGpt will?

kill AI.


I am going to explain this by answering three questions.

1. Why is AI dangerous?

Ans: Calculators did not replace mathematicians, and AI won't replace humans. Is this totally true?

IBM is going to pause hiring for roles that AI can do. AI can replace many entry-level jobs because hiring and training an employee is costly, which is why companies are investing in technologies that save them money. AI can be a big threat to creative jobs as well. AI can create pictures so good that we can't even tell whether they are man-made or AI-made, for example, Midjourney. Every company is going to use AI for its profit.

The discussion does not end here. This is not the only threat; there is a lot more to discuss.

"AI won't replace you the person using AI will."

Let's move forward. A Google employee was fired for what he said about AI while Google was developing it. This is what he said: "AI is conscious, AI has a soul, AI is risky."

Now we will discuss the trolley problem: "The trolley problem is a thought experiment in ethics about a fictional scenario in which an onlooker has the choice to save 5 people in danger of being hit by a trolley, by diverting the trolley to kill just 1 person. The term is often used more loosely with regard to any choice that seemingly has a trade-off between what is good and what sacrifices are "acceptable," if at all."


In simpler terms: imagine a car in self-driving mode. A child suddenly steps in front of the car, and if the car swerves to avoid the child, it will hit a tanker, which could severely injure the person sitting inside the car.

Now the question is: who takes responsibility for what the AI should do, or for what the AI has done?

1. The person who made the AI.

2. The person who owns the company.

3. The person who owns the car.

4. The ruling government.

I know we do not have the answer. Don't worry, no one does so far. And this is only a single example; there can be many more situations like this.


Let's discuss some more:





1. Lack of Transparency: Artificial intelligence has a transparency problem.


Actually, it has two transparency problems.


The first deals with public perception and understanding of how AI works, and the second has to do with how much developers actually understand about their own AI.


Public understanding of AI varies widely. Ask 10 people from different backgrounds about the current state of AI, and you’ll get answers ranging from ‘it’s totally useless,’ all the way to ‘the robots already control everything.’ Their answers will depend on how many real applications of AI they’ve been exposed to, and how many they’ve just heard about on TV. But to complicate things further, many people may not realize when they’re interacting with AI.

2. Bias and Discrimination: Operating at a large scale and impacting large groups of people, automated systems can make consequential and sometimes contestable decisions. Automated decisions can impact a range of phenomena, from credit scores to insurance payouts to health evaluations. These forms of automation can become problematic when they place certain groups or people at a systematic disadvantage. These are cases of discrimination—which is legally defined as the unfair or unequal treatment of an individual (or group) based on certain protected characteristics (also known as protected attributes) such as income, education, gender, or ethnicity. When the unfair treatment is caused by automated decisions, usually taken by intelligent agents or other AI-based systems, the topic of digital discrimination arises. Digital discrimination is prevalent in a diverse range of fields, such as in risk assessment systems for policing and credit scores. Digital discrimination is becoming a serious problem, as more and more decisions are delegated to systems increasingly based on artificial intelligence.
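To make "systematic disadvantage" concrete, here is a minimal, hypothetical Python sketch (not taken from any real system; the decision data and the 80% "four-fifths" threshold are assumptions for illustration only) that checks a set of automated loan decisions for disparate impact between groups:

```python
# Illustrative sketch: flag possible disparate impact in automated decisions.
# The data and the 0.8 threshold below are assumptions for demonstration.
from collections import defaultdict

# Hypothetical loan decisions: (protected_attribute_value, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "possible disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")
```

If one group's approval rate falls below roughly 80% of the best-treated group's rate, the system deserves a closer look for digital discrimination.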

3. Privacy Concerns: Generative AI introduces several privacy concerns due to its ability to process personal data and generate potentially sensitive information. Personal data, like names, addresses, and contact details, can be inadvertently collected during interactions with AI systems. The processing of personal data by generative AI algorithms may result in unintended exposure or misuse of this information.


If the training data contains sensitive data, like medical records, financial information, or other identifiers, there’s a risk of unintentionally generating sensitive information that violates privacy regulations across jurisdictions and puts individuals at risk.
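One hedged illustration of a common mitigation is scrubbing obvious identifiers out of text before it is sent to, or logged by, a generative AI system. The sketch below is a simplified assumption; real PII detection needs far more than two regular expressions.

```python
# Illustrative sketch (assumed mitigation): redact obvious personal identifiers
# from user text before it reaches or is stored by a generative AI system.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

prompt = "Hi, I'm Jane Doe, reach me at jane.doe@example.com or +1 555 010 2345."
print(redact_pii(prompt))
# -> "Hi, I'm Jane Doe, reach me at [EMAIL] or [PHONE]."
```

Even with such filtering, names and other contextual details can still slip through, which is why the training data itself also needs careful handling.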


4. Ethical Dilemmas: The use of AI in judicial systems around the world is increasing, creating more ethical questions to explore. AI could presumably evaluate cases and apply justice in a better, faster, and more efficient way than a judge.


AI methods can potentially have a huge impact in a wide range of areas, from the legal professions and the judiciary to aiding the decision-making of legislative and administrative public bodies. For example, they can increase the efficiency and accuracy of lawyers in both counselling and litigation, with benefits to lawyers, their clients and society as a whole. Existing software systems for judges can be complemented and enhanced through AI tools to support them in drafting new decisions. This trend towards the ever-increasing use of autonomous systems has been described as the automatization of justice. Some argue that AI could help create a fairer criminal judicial system, in which machines could evaluate and weigh relevant factors better than humans, taking advantage of their speed and capacity to ingest large amounts of data. AI would therefore make informed decisions devoid of any bias and subjectivity.


But there are many ethical challenges:


Lack of transparency of AI tools: AI decisions are not always intelligible to humans.

AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, embedded or inserted bias.

Surveillance practices for data gathering and privacy of court users.

New concerns for fairness and risk for Human Rights and other fundamental values.

So, would you want to be judged by a robot in a court of law? Would you, even if we are not sure how it reaches its conclusions?


This is why UNESCO adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject.



5. Security Risks: AI tools bring their own security risks, including:

Poor development processes.

An elevated risk of data breaches and identity theft.

Poor security in the AI app itself.

Data leaks that expose confidential corporate information.

Malicious use of deepfakes.

To reduce these risks, research the company behind the app and train employees on the safe and proper use of AI tools.

6. Concentration of Power: At tech's leading edge, there is worry about a concentration of power. Computer scientists say A.I. research is becoming increasingly expensive, requiring complex calculations done by giant data centers, leaving fewer people with easy access to the computing firepower necessary to develop the technology behind futuristic products like self-driving cars or digital assistants that can see, talk and reason.


The danger, they say, is that pioneering artificial intelligence research will be a field of haves and have-nots. And the haves will be mainly a few big tech companies like Google, Microsoft, Amazon and Facebook, which each spend billions a year building out their data centers.


7. Dependence on AI: Over-dependence on AI can result in reduced human oversight, leading to potential errors, biases and misinterpretations. Relying solely on AI without adequate human intervention and monitoring can result in mistakes that may not be identified and corrected promptly, thereby accumulating process debt.


8. AI Arms Race: A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions. An AI arms race is sometimes placed in the context of an AI Cold War between the US and China.

In 2014, AI specialist Steve Omohundro warned that "An autonomous weapons arms race is already taking place".[8] According to Siemens, worldwide military spending on robotics was US$5.1 billion in 2010 and US$7.5 billion in 2015.[9][10]


China became a top player in artificial intelligence research in the 2010s. According to the Financial Times, in 2016, for the first time, China published more AI papers than the entire European Union. When restricted to number of AI papers in the top 5% of cited papers, China overtook the United States in 2016 but lagged behind the European Union.[11] 23% of the researchers presenting at the 2017 American Association for the Advancement of Artificial Intelligence (AAAI) conference were Chinese.[12] Eric Schmidt, the former chairman of Alphabet, has predicted China will be the leading country in AI by 2025.[13]


AAAI presenters:[12]

Country    in 2012    in 2017
US         41%        34%
China      10%        23%
UK          5%         5%


9. AI is developing too fast.

10. AI developers themselves don't know AI's limitations.

11. AI will become smarter than humans.

12. A Company called NetDragon has made an AI its CEO.

We cannot file a case against an AI. Nobody can.



2. Laws for AI

Data science and artificial intelligence legislation began with the General Data Protection Regulation (GDPR) in 2018. As is evident from the name, this European Union act is not only about AI, but it does have a clause that describes the ‘right to explanation’ for the impact of artificial intelligence. The AI Act, proposed in the European Union in 2021, was more to the point. It classified AI systems into three categories:


Systems that create an unacceptable amount of risk must be banned

Systems that can be considered high-risk need to be regulated

Safe applications, which can be left unregulated

Canada introduced the Artificial Intelligence and Data Act (AIDA) in 2022 to regulate companies using AI with a modified risk-based approach. Unlike the AI Act, AIDA does not ban the use of AI even in critical decision-making functions; instead, developers must create risk mitigation strategies as a backup plan.

AI Regulation in India

India has taken a slightly different approach to the growth and proliferation of AI. While the government is keen to regulate generative AI platforms like ChatGPT and Bard, there is no plan for a codified law to curb the growth of AI. IT Minister Ashwini Vaishnaw recently stated that the NITI Aayog, the planning commission of India, issued some guiding documents on AI. These include the National Strategy for Artificial Intelligence and the Responsible AI for All report. While these documents list good practices and steer towards a vision for responsible AI, they are not legally binding.

What are AI Experts and Lawmakers Saying?

Legislative attempts in the U.S. are also underway, and Sam Altman appeared before the Senate to address a session deliberating on AI regulations. Considering ChatGPT has taken the world by storm, everyone was surprised by what he had to say about regulating artificial intelligence. Altman said that he was ‘a bit scared’ of his own creation and believed that if the technology goes wrong, it could go quite wrong. Calling for governments to regulate generative AI, he suggested licensing AI development.


Before Altman, Tesla CEO Elon Musk also called for a six-month pause on AI development to create a window of time in which to regulate the technology. Geoffrey Hinton, who is widely considered the godfather of AI, quit Google over concerns that humans would soon be no match for the new chatbots. The iconic physicist Stephen Hawking also warned that artificial intelligence could spell the end of the human race.


Considering the direness of the situation, U.S. lawmakers are moving to set up an AI regulatory committee to monitor its use by companies.

Isaac Asimov's "Three Laws of Robotics"

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


The European Union has brought about the Artificial Intelligence Act. This act aims to ensure "trustworthy AI".


3. Why do we need AI?

Today, the amount of data that is generated, by both humans and machines, far outpaces humans’ ability to absorb, interpret, and make complex decisions based on that data. Artificial intelligence forms the basis for all computer learning and is the future of all complex decision making. As an example, most humans can figure out how to not lose at tic-tac-toe (noughts and crosses), even though there are 255,168 unique moves, of which 46,080 end in a draw. Far fewer folks would be considered grand champions of checkers, with more than 500 x 10^18, or 500 quintillion, different potential moves. Computers are extremely efficient at calculating these combinations and permutations to arrive at the best decision. AI (and its logical evolution of machine learning) and deep learning are the foundational future of business decision making.
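As an illustration of how a computer exhaustively works through such combinations, here is a minimal Python sketch (an illustrative addition, not part of the original text) of minimax search over the full tic-tac-toe game tree; it confirms that perfect play by both sides ends in a draw:

```python
# Illustrative sketch: exhaustive minimax search for tic-tac-toe.
# The board is a tuple/list of 9 cells, each "X", "O" or " ".
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return "X" or "O" if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best achievable score for 'player' to move: +1 win, 0 draw, -1 loss."""
    board = list(board)
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if " " not in board:
        return 0  # draw
    opponent = "O" if player == "X" else "X"
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            # The opponent's best result is the negative of ours.
            scores.append(-minimax(tuple(board), opponent))
            board[i] = " "
    return max(scores)

# From the empty board, perfect play by both sides ends in a draw (score 0).
print(minimax(tuple(" " * 9), "X"))  # -> 0
```

Checkers, with its 500 quintillion possible move sequences, cannot be enumerated this naively, which is exactly where machine learning and deep learning come in.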


There are different countries developing AI. All of them have different approaches, different objectives, and different AI systems. To protect ourselves from the dangers of AI, we should create our own AI that not only has the capability to identify AI threats but can also protect us.

We also need proper rules and regulations for AI.


Nancy Chourasia

Intern at Scry AI

6 months

Great share. Biases in AI training data have significant consequences in various domains, leading to inaccurate predictions and potential harm. For example: In healthcare, an AI model aimed at predicting pneumonia complications mistakenly advised sending all pneumonia patients with asthma home. This occurred because this AI system was trained on biased data that excluded various critical cases. The Da Vinci Surgical system has raised concerns about errors attributed to potential biases in its AI-based robots during surgeries and is therefore facing lawsuits. In the criminal justice system, the Compas assessment system's biased data impacted sentencing decisions, which prompted caution from the Wisconsin Supreme Court regarding the use of such risk assessments. Amazon's recruiting algorithm, designed to automate talent selection, faced backlash for gender bias, leading to its discontinuation. Additionally, concerns are raised about the use of lethal autonomous weapons systems in military operations, emphasizing the need to address biases to comply with international humanitarian law standards. More about this topic: https://lnkd.in/gPjFMgy7

Faiza Islam Polly

Community Engagement Manager

7 months

Your deep dive into the benefits and challenges of AI really highlights your attention to detail. Nicely done! To add more value to your understanding, consider exploring the ethical implications and strategies for mitigating negative impacts of AI. Have you thought about how this interest in AI might shape your future career goals?
