AI Unstoppable - Misuse posing danger

We all know that AI has been a boon, but at times it is becoming a bane, mainly because we tend to think only about its beneficial and positive aspects. Before going into the details of the misuse of AI, which is having dangerous consequences, we should recall the words of the billionaire Elon Musk, who owns Twitter: according to him, within the next five years AI will surpass human intelligence and become super-intelligent in the real sense. He has further emphasised that AI is more dangerous than an atomic bomb. PM Modi has also said that AI is being misused and that it is necessary for us to remain alert.

It has become an everyday affair to read in the newspapers about cyber fraud and "digital arrest"; even highly intelligent and intellectual people are becoming victims of scams, mainly those who are active on Telegram, WhatsApp, Facebook, Instagram and the like. Familiarity with internet banking, Skype, Google Meet and Zoom is increasing people's chances of being caught by cyber criminals. There is no such thing as a "digital arrest" in law, yet people are still being lured by the greed of high returns in the share market. They are approached through messages on their social media accounts, and with the help of AI, cyber criminals get to know people inside out. Advertisements arrive in the form of free investment-skills courses and zero-loss schemes promising high returns, and in most cases threats follow: a person is told that their parcel has been seized and that warrants have been issued, with the criminals posing as police officers. There are bogus stock-trading apps, website links and fake demat accounts, and sometimes people are lured into groups with hundreds of fake members. Even the DPs (display pictures) in these groups are fake. People are losing the money in their bank accounts and their mobile phones are being hacked; once a scammer knows your Aadhaar number and phone number, it becomes very easy to gain access.

Deepfake danger to privacy and reputation

Technological advancement in the field of AI has made such tremendous progress that a one-minute video, or around 40 pictures, can be faked to look real; this capability is appearing in the form of deepfakes. Recently the actress Rashmika Mandanna complained about a deepfake in which her face was superimposed on another woman's body. The actress Kajol also complained about such a video. It is very difficult to make out whether a particular video is real or fake. Deepfakes have also been weaponised to smear important women leaders of other countries. It has been reported that the information minister of Pakistan's Punjab province, Ms Azma Bukhari, one of that province's prominent leaders, became a victim when a sexualised deepfake video carrying her counterfeit image was published. It became difficult for this popular leader to convince people that her face had been superimposed on the sexualised body of an Indian actress. Shamelessness has no limits for anti-social elements: according to Azma Bukhari, photos of her husband and son were also manipulated to imply that she was seen in public with another man. Why talk about Pakistan? Because media literacy there is poor, and taking advantage of this, deepfakes are being weaponised to smear women in the public sphere with sexual content, deeply damaging their reputations in a country with conservative mores.

In another case, deepfake technology was used in a different manner: when ex-Prime Minister Imran Khan was in prison, his team used an AI tool to generate speeches in his voice and shared them on social media, which helped him campaign from behind bars.

The harmful effects of deepfakes on a man in politics typically come through criticism of his ideology, corruption or status; for a woman, they are more dangerous because they tear down her image. Agence France-Presse, commonly known as AFP, sought the opinion of the US-based AI expert Henry Ajder, who said, "When they are accused, it almost always revolves around their sex lives, their personal lives, whether they are good mums, whether they are good wives." He further stressed that deepfakes are a very harmful weapon.

There are instances of people being blackmailed: criminals deepfake the voice and picture of a near relative and make a call seeking money on the pretext of an emergency. The only safeguard is that AI cannot copy human emotions. Another danger of AI lies in the area of privacy: today, if you are on Facebook, LinkedIn, Twitter or WhatsApp, then with the help of AI it is very easy to monitor your online and offline activity, because face detection and algorithms can easily identify your movements.

Deepfake romance scams

Every day we read about cyber scams, digital arrests and other frauds; dating scams, in which people are blackmailed, are quite common. Nowadays, AI-generated deepfake romance has emerged as a new challenge. Recently the Hong Kong police reported the spread of this scam across Asia. It starts with a video message from a beautiful lady who is, in fact, an AI persona. In some cases, morphed videos, pictures and voices created with deepfake techniques are used so that the victim believes he is having a romance with a real woman. The blackmailing then begins, with money sought for a medical emergency or some similar pretext; money is thus extorted for a fake romance with an AI-created beauty.

In India the situation is even worse, because growth and advancement in the field of AI is tremendous, and deepfake videos of top management are circulated over social media giving financial advice to the public. The Reserve Bank of India has cautioned the public about fake videos of its Governor circulating on social media in connection with fake investment schemes. These videos use technological tools to urge people to invest their money in such schemes. Nobody believed that such negative, harmful effects would come with the introduction of AI.

Fake experts in trading scams

During the last few years there has been a boom in the stock market, mainly because trading applications help investors gain knowledge of the share market, and a large number of advertisements for such applications appear on WhatsApp, Telegram, Facebook and Instagram. These advertisements claim easy earnings in the share market, and the fake applications among them are tools for cyber criminals. Like the digital-arrest fraud, trading scams are spread with the help of AI. The scammers use fake profiles, and their modus operandi is distinctive: to begin with, they lure people with online investment and part-time work-from-home assignments. Advertisements on social networking sites promote fake stock-trading apps and fake website links with assurances of high returns. Fake audio and video of an "investment advisor" are generated to persuade victims to open fake demat accounts. Cyber criminals create fake groups, with themselves as sole admin, by sending out social media links; people are encouraged to join a group consisting of 200 to 500 members with attractive DPs. Fake discussions between the advisor and the members are staged through chats showing screenshots of high profits in shares. People are duped into joining these groups and investing money.

Now the question arises: why and how do scammers catch hold of even the most intelligent people? First of all, victims are influenced by offers to learn investment skills free of cost, with the guarantee of zero-loss schemes and 100% returns. These scammers also advertise that they are registered with SEBI and RBI as advisors.

Let us look at incidents involving cybercrime victims who held very high positions but still fell prey. A woman IAS officer from Mumbai got trapped in a high-return scam and lost one crore rupees; the scammer claimed to be an international expert. Another incident involved a renowned IG, the IPS officer D. K. Pandey, whose reported online-trading earnings of 381 crores were extorted by a scammer. There are a large number of such cases in which educated people have lost money to these cyber scams. The wife of an industrialist from Kodarpuram, Tamil Nadu, downloaded an online share-trading app and invested over 10 crore rupees; the scammer showed profits in a virtual account on the app, though the same was not reflected in her actual account, and she realised quite late that she had lost the money.

Cyber-attacks targeting safety and security

Over a period of time, cyber criminals have expanded their fraud tactics beyond imagination; they have now started manipulating narratives, deploying disinformation to destabilise organisations and tarnish their reputations. In the recent past, a leading insurance firm became the victim of a data breach; it was not just a case of stolen data but a calculated attempt to destroy the career of the company's CEO. The hacker, going by the name "XenZen", did not merely breach the insurance company's systems but also used a fabricated e-mail to try to convince the world that the CEO had willingly handed over the sensitive data. This accusation sparked headlines that were not true. Let us go deeper into this data-breach story. The hacker posted an offer on the dark web to sell 7 TB of customer data stolen from the insurance firm, comprising the personal information, including names, addresses and health records, of over 31 million people. The breach itself was very real and on a massive scale, and the hacker claimed that the CEO had leaked the data. Later it was revealed that the CEO's involvement was fabricated: "XenZen" had doctored an e-mail by altering its HTML code with the browser's "inspect element" function, a very easy way to make it look as if the CEO had sent sensitive information.

The hacker had found the credentials on the dark web, from a separate credential breach, and used them to exploit a vulnerability in the company's systems. It became a case of exploiting a technical flaw: XenZen had obtained the credentials without insider help and gained access to the company's database by exploiting an Insecure Direct Object Reference (IDOR) vulnerability in the company's API (Application Programming Interface). IDOR is a type of security flaw that allows unauthorised users to access sensitive data simply by manipulating a URL (Uniform Resource Locator). In the case of this insurance company, the flaw gave the hacker access to 7 TB of customer information, allowing the data to be stolen without raising any red flags. XenZen never needed insider collusion; the basic objective was to destroy the reputation of the person responsible for protecting the data.
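To make the IDOR flaw concrete, here is a minimal Python sketch. It does not reflect the insurance company's actual API; the record data, user names and function names are all hypothetical. It contrasts a vulnerable handler, which trusts whatever record ID the caller supplies (as in a manipulated URL), with a fixed handler that checks ownership before returning the data.

```python
# Illustrative IDOR sketch -- all names and data are hypothetical,
# not the real insurance company's API.

RECORDS = {
    101: {"owner": "alice", "health_record": "alice-private-data"},
    102: {"owner": "bob", "health_record": "bob-private-data"},
}

def get_record_vulnerable(requesting_user: str, record_id: int) -> dict:
    """Vulnerable handler: trusts the ID supplied in the request.

    Any authenticated user can read any record simply by changing the
    numeric ID in the URL -- the essence of an IDOR flaw.
    """
    return RECORDS[record_id]

def get_record_fixed(requesting_user: str, record_id: int) -> dict:
    """Fixed handler: verifies the record belongs to the requester."""
    record = RECORDS[record_id]
    if record["owner"] != requesting_user:
        raise PermissionError("not authorised to read this record")
    return record
```

In the vulnerable version, a user logged in as "bob" can fetch record 101 and read Alice's health data; in the fixed version the same request is refused. The standard remedy is exactly this server-side authorisation check on every object reference, rather than relying on IDs being hard to guess.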

AI advancement - unanticipated misuse

In today's scenario, data is the most important commodity; it can be used for beneficial purposes but can also be misused by cyber criminals. Our personal, financial and health information is sensitive in nature, and the biggest danger of AI arises when it is misused by hackers. To understand this misuse, let us take a live example.

Johnson was suffering from a skin disease caused by an immunity disorder. He consulted a physician, who advised certain medical tests. Pathological laboratories are now so quick and efficient that Johnson received the report online within a few hours. He spoke to the doctor on the phone, who advised him to come the next morning. Unable to wait to learn the result, Johnson thought of using AI and uploaded the test report to an AI chatbot. The response was very quick and he got detailed information about his disease, but the matter did not end there. Soon after, Johnson started getting advertisements related to his disease on all his social media accounts. After some time he was getting calls from different hospitals recommending treatment and hospitalisation. Johnson realised that his data had been leaked by the AI chatbot.

It has been observed that personal data leakage can have dangerous consequences. Sometimes we use chatbots to answer our questions, and they also collect our data in order to process it. If that data collection is not safe, or the tools used for processing are not secured, then your data can be hacked by cyber criminals.

Financial data is always in demand by hackers: bank account numbers, credit card details, e-mail IDs, Aadhaar numbers, addresses and so on. Cyber criminals can use such data to create a fake profile in your name, which can damage your credit record. As per a 2020 report, around 4.35 billion data records reached cyber criminals through networking sites; if a chatbot's data storage is not adequately protected and its encryption standards are not high, the data can be stolen.

Business data is equally sensitive: details of product launches, business strategy and other financial information of the company, and so on.

AI whistleblower dies by suicide for ethics and values

We are talking about the dangers of AI, and one of the most tragic situations came when Suchir Balaji, a 26-year-old former employee of Indian origin at the artificial intelligence giant OpenAI, died by suicide in San Francisco on 26 November 2024.

Balaji was a whistle-blower against the AI giant OpenAI, where he had worked for nearly four years. His suicide came three months after he publicly accused OpenAI of violating US copyright law while developing ChatGPT, a generative AI program used by hundreds of millions of people across the world and a money-making sensation. From late 2022 onwards, lawsuits were filed against OpenAI by authors, computer programmers and journalists for allegedly stealing their copyrighted material, with claims valued at 150 billion US dollars.

Nearly a month before his death, on 23 October 2024, Balaji gave an interview to The New York Times, openly saying that OpenAI was harming businesses and entrepreneurs whose data had been used to train ChatGPT. Balaji had left OpenAI because he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit. He said, "If you believe what I believe, you have to just leave the company. This is not a sustainable model for the internet ecosystem as a whole." Earlier, in a post on X in October itself, Balaji said, "I initially did not know much about copyright, fair use, etc. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defence for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they are trained on."

Suchir Balaji, a young, bright technocrat of Indian origin, gave his life for the sake of professional ethics and values, in the larger interest of mankind. He was brought up in Cupertino and studied Computer Science at UC Berkeley. With no suicide note reported so far, his mother has requested privacy while grieving the death of her son.

Prof. (Dr.) Dewakar Goel

Executive Director HR at Airports Authority of India | Author of 25+ Books | Chairman at Aero Academy of Aviation Science & Technology.
