Artificial Intelligence (AI) is everywhere!
AI? Are you clueless about what Artificial Intelligence is all about? Of course, nobody is completely clueless about it in 2020!
Yes, if you are a non-technical person, what you know about AI might look a little different from the technical reality. And it's totally okay not to know everything about everything.
A few years back, before stepping into this technical field, what I used to think was that AI must be all about robotics and stuff like that, an image created in my head by the visuals I saw back then. Even right now, at the top of this article there is a picture with a robot hand; it was visuals just like this that made me assume robots were what AI is all about. The reality is, back then I never tried to dig deeper and study more about it. But as I started reading more and more about it day by day, I realised AI is not just about robots, or robots taking over humans someday; it's far beyond that.
So now that you have landed here, I guess you too are interested in AI. Let's start from the very basics of AI and then move forward step by step so that things are clear in the end.
What is Artificial Intelligence?
Artificial Intelligence (AI) is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans. Examples include speech recognition, problem-solving, learning and planning.
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving. The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal.
John McCarthy, widely recognized as the father of Artificial Intelligence, passed away on the 24th of October, 2011, at the age of 84. He leaves behind a great legacy in the fields of Computer Science and Artificial Intelligence (AI).
AI programming focuses on three cognitive skills: learning, reasoning and self-correction.
Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task (a short sketch of this idea appears below).
Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.
Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
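To make the learning and reasoning steps a little more concrete, here is a minimal sketch. It is my own illustration, not part of the original definitions, and it assumes the scikit-learn library is available; the toy data is invented. It shows how labeled data can be turned into an explicit, step-by-step rule and then applied to a new example:

```python
# A minimal sketch of "learning": turning labeled data into a rule.
# Assumes scikit-learn is installed (pip install scikit-learn); the toy data is made up.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [hours_studied, hours_slept] -> passed the exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 7], [8, 5], [9, 8], [3, 3]]
y = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)  # the "learning" step: a rule is derived from the data

# The learned rule is just a set of step-by-step instructions:
print(export_text(model, feature_names=["hours_studied", "hours_slept"]))

# The "reasoning" step: apply the learned rule to a new, unseen example
print(model.predict([[7, 6]]))
```

Retraining the model whenever new examples arrive is, in spirit, the self-correction step described above.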
Understanding Artificial Intelligence:
When most people hear the term artificial intelligence, the first thing they usually think of is robots. That's because big-budget films and novels weave stories about human-like machines that wreak havoc on Earth. But nothing could be further from the truth.
Humans are smarter than any type of AI -- for now.
Artificial intelligence is based on the principle that human intelligence can be defined in a way that a machine can easily mimic it and execute tasks, from the most simple to those that are even more complex. The goals of artificial intelligence include learning, reasoning, and perception.
Hanson Robotics' most advanced human-like robot, Sophia, personifies our dreams for the future of AI.
As technology advances, previous benchmarks that defined artificial intelligence become outdated. For example, machines that calculate basic functions or recognize text through optical character recognition are no longer considered to embody artificial intelligence, since this function is now taken for granted as an inherent computer function.
AI is continuously evolving to benefit many different industries. Machines are wired using a cross-disciplinary approach based in mathematics, computer science, linguistics, psychology, and more.
Algorithms often play a very important part in the structure of artificial intelligence, where simple algorithms are used in simple applications, while more complex ones help frame strong artificial intelligence.
What are the three types of AI?
AI technologies are categorised by their capacity to mimic human characteristics, the technology they use to do this, their real-world applications, and the theory of mind, which we’ll discuss in more depth below.
Using these characteristics for reference, all artificial intelligence systems - real and hypothetical - fall into one of three types:
- Artificial narrow intelligence (ANI), which has a narrow range of abilities;
- Artificial general intelligence (AGI), which is on par with human capabilities; or
- Artificial superintelligence (ASI), which is more capable than a human.
Artificial Narrow Intelligence (ANI) / Weak AI / Narrow AI:
Artificial narrow intelligence (ANI), also referred to as weak AI or narrow AI, is the only type of artificial intelligence we have successfully realized to date. Narrow AI is goal-oriented, designed to perform singular tasks - e.g. facial recognition, speech recognition/voice assistants, driving a car, or searching the internet - and is very intelligent at completing the specific task it is programmed to do.
While these machines may seem intelligent, they operate under a narrow set of constraints and limitations, which is why this type is commonly referred to as weak AI. Narrow AI doesn’t mimic or replicate human intelligence, it merely simulates human behaviour based on a narrow range of parameters and contexts.
Consider the speech and language recognition of the Siri virtual assistant on iPhones, vision recognition of self-driving cars, and recommendation engines that suggest products you might like based on your purchase history. These systems can only learn or be taught to complete specific tasks.
Narrow AI has experienced numerous breakthroughs in the last decade, powered by achievements in machine learning and deep learning. For example, AI systems today are used in medicine to diagnose cancer and other diseases with extreme accuracy through replication of human-esque cognition and reasoning.
Narrow AI’s machine intelligence comes from the use of natural language processing (NLP) to perform tasks. NLP is evident in chatbots and similar AI technologies. By understanding speech and text in natural language, AI is programmed to interact with humans in a natural, personalised manner.
The $127 billion autonomous vehicle market is being driven by AI.
Narrow AI can either be reactive, or have a limited memory. Reactive AI is incredibly basic; it has no memory or data storage capabilities, emulating the human mind’s ability to respond to different kinds of stimuli without prior experience. Limited memory AI is more advanced, equipped with data storage and learning capabilities that enable machines to use historical data to inform decisions.
Most AI is limited memory AI, where machines use large volumes of data for deep learning. Deep learning enables personalised AI experiences, for example, virtual assistants or search engines that store your data and personalise your future experiences.
Examples of narrow AI:
- Google Search
- Siri by Apple, Alexa by Amazon, Cortana by Microsoft and other virtual assistants
- IBM’s Watson
- Image / facial recognition software
- Disease mapping and prediction tools
- Manufacturing and drone robots
- Email spam filters / social media monitoring tools for dangerous content
- Entertainment or marketing content recommendations based on watch/listen/purchase behaviour
- Self-driving cars
Artificial General Intelligence (AGI) / Strong AI / Deep AI:
Artificial general intelligence (AGI), also referred to as strong AI or deep AI, is the concept of a machine with general intelligence that mimics human intelligence and/or behaviours, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is indistinguishable from that of a human in any given situation.
AI researchers and scientists have not yet achieved strong AI. To succeed, they would need to find a way to make machines conscious, programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wider range of different problems.
Strong AI uses a theory of mind AI framework, which refers to the ability to discern the needs, emotions, beliefs and thought processes of other intelligent entities. Theory of mind level AI is not about replication or simulation; it's about training machines to truly understand humans.
The immense challenge of achieving strong AI is not surprising when you consider that the human brain is the model for creating general intelligence. The lack of comprehensive knowledge on the functionality of the human brain has researchers struggling to replicate basic functions of sight and movement.
Fujitsu-built K, one of the fastest supercomputers, is one of the most notable attempts at achieving strong AI, but considering it took 40 minutes to simulate a single second of neural activity, it is difficult to determine whether or not strong AI will be achieved in our foreseeable future. As image and facial recognition technology advances, it is likely we will see an improvement in the ability of machines to learn and see.
Artificial Superintelligence (ASI):
Artificial superintelligence (ASI) is the hypothetical AI that doesn't just mimic or understand human intelligence and behaviour; ASI is where machines become self-aware and surpass the capacity of human intelligence and ability.
Superintelligence has long been the muse of dystopian science fiction in which robots overrun, overthrow, and/or enslave humanity. The concept of artificial superintelligence sees AI evolve to be so akin to human emotions and experiences, that it doesn’t just understand them, it evokes emotions, needs, beliefs and desires of its own.
In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be exceedingly better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have a greater memory and a faster ability to process and analyse data and stimuli. Consequently, the decision-making and problem-solving capabilities of superintelligent beings would be far superior to those of human beings.
The potential of having such powerful machines at our disposal may seem appealing, but the concept itself has a multitude of unknown consequences. If self-aware superintelligent beings came to be, they would be capable of ideas like self-preservation. The impact this would have on humanity, our survival, and our way of life is pure speculation.
Strong AI vs. Weak AI:
AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.
Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both a Turing test and the Chinese room test.
Augmented intelligence vs. artificial intelligence:
Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services. The concept of the technological singularity -- a future ruled by an artificial superintelligence that far surpasses the human brain's ability to understand it or how it is shaping our reality -- remains within the realm of science fiction.
Four types of artificial intelligence
The categories are as follows:
- Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.
- Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.
In the survey "When Will AI Exceed Human Performance? Evidence from AI Experts," elite researchers in artificial intelligence predicted that "human level machine intelligence," or HLMI, has a 50 percent chance of occurring within 45 years and a 10 percent chance of occurring within 9 years.
- Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.
- Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
Cognitive computing and AI:
The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.
The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.
AI as a service (AIaaS):
Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.
Popular AI cloud offerings include the following:
- Amazon AI
- IBM Watson Assistant
- Microsoft Cognitive Services
- Google AI
Examples of AI technology
AI is incorporated into a variety of different types of technology. Here are six examples:
- Automation. When paired with AI technologies, automation tools can expand the volume and types of tasks performed. An example is robotic process automation (RPA), a type of software that automates repetitive, rules-based data processing tasks traditionally done by humans. When combined with machine learning and emerging AI tools, RPA can automate bigger portions of enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to process changes.
- Machine learning. This is the science of getting a computer to act without being explicitly programmed. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms:
- Supervised learning. Data sets are labeled so that patterns can be detected and used to label new data sets.
- Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or differences.
- Reinforcement learning. Data sets aren't labeled but, after performing an action or several actions, the AI system is given feedback.
- Machine vision. This technology gives a machine the ability to see. Machine vision captures and analyzes visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in a range of applications from signature identification to medical image analysis. Computer vision, which is focused on machine-based image processing, is often conflated with machine vision.
- Natural language processing. This is the processing of human language by a computer program. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides if it's junk (a short sketch of this idea appears after this list). Current approaches to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis and speech recognition.
- Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or perform consistently. For example, robots are used in assembly lines for car production or by NASA to move large objects in space. Researchers are also using machine learning to build robots that can interact in social settings.
- Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
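Since the machine learning and natural language processing items above are the most hands-on of the six, here is a minimal sketch of the classic spam-detection task. It is my own illustration rather than anything from a specific product, it assumes the scikit-learn library is available, and the tiny email dataset is invented purely for demonstration:

```python
# A minimal sketch of NLP-style spam detection using supervised machine learning.
# Assumes scikit-learn is installed; the example emails and labels are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report?",
]
labels = ["spam", "spam", "ham", "ham"]  # labeled data -> supervised learning

# Bag-of-words features feeding a Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Claim your free reward now"]))     # likely ['spam']
print(model.predict(["Agenda for tomorrow's meeting"]))  # likely ['ham']
```

A real spam filter would of course be trained on millions of messages with far richer features, but the overall recipe -- labeled examples in, a predictive rule out -- is the same.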
Use cases of AI in industries:
1. Data Security: Malware is a huge — and growing — problem. In 2014, Kaspersky Lab said it had detected 325,000 new malware files every day. But deep-learning security company Deep Instinct says that each piece of new malware tends to have almost the same code as previous versions — only between 2 and 10% of the files change from iteration to iteration. Their learning model has no problem with the 2–10% variations, and can predict which files are malware with great accuracy. In other situations, machine learning algorithms can look for patterns in how data in the cloud is accessed, and report anomalies that could predict security breaches.
2. Personal Security: If you’ve flown on an airplane or attended a big public event lately, you almost certainly had to wait in long security screening lines. But machine learning is proving that it can be an asset to help eliminate false alarms and spot things human screeners might miss in security screenings at airports, stadiums, concerts, and other venues. That can speed up the process significantly and ensure safer events.
3. Financial Trading: Many people are eager to be able to predict what the stock markets will do on any given day — for obvious reasons. But machine learning algorithms are getting closer all the time. Many prestigious trading firms use proprietary systems to predict and execute trades at high speeds and high volume. Many of these rely on probabilities, but even a trade with a relatively low probability, at a high enough volume or speed, can turn huge profits for the firms. And humans can’t possibly compete with machines when it comes to consuming vast quantities of data or the speed with which they can execute a trade.
4. Healthcare: Machine learning algorithms can process more information and spot more patterns than their human counterparts. One study used computer-assisted diagnosis (CAD) to review the early mammography scans of women who later developed breast cancer, and the computer spotted 52% of the cancers as much as a year before the women were officially diagnosed. Additionally, machine learning can be used to understand risk factors for disease in large populations. The company Medecision developed an algorithm that was able to identify eight variables to predict avoidable hospitalizations in diabetes patients.
5. Marketing Personalization: The more you can understand about your customers, the better you can serve them, and the more you will sell. That’s the foundation behind marketing personalisation. Perhaps you’ve had the experience in which you visit an online store and look at a product but don’t buy it — and then see digital ads across the web for that exact product for days afterward. That kind of marketing personalization is just the tip of the iceberg. Companies can personalize which emails a customer receives, which direct mailings or coupons, which offers they see, which products show up as “recommended” and so on, all designed to lead the consumer more reliably towards a sale.
6. Fraud Detection: Machine learning is getting better and better at spotting potential cases of fraud across many different fields (see the short sketch after this list). PayPal, for example, is using machine learning to fight money laundering. The company has tools that compare millions of transactions and can precisely distinguish between legitimate and fraudulent transactions between buyers and sellers.
7. Recommendations: You’re probably familiar with this use if you use services like Amazon or Netflix. Intelligent machine learning algorithms analyze your activity and compare it to the millions of other users to determine what you might like to buy or binge watch next. These recommendations are getting smarter all the time, recognizing, for example, that you might purchase certain things as gifts (and not want the item yourself) or that there might be different family members who have different TV preferences.
8. Online Search: Perhaps the most famous use of machine learning, Google and its competitors are constantly improving what the search engine understands. Every time you execute a search on Google, the program watches how you respond to the results. If you click the top result and stay on that web page, we can assume you got the information you were looking for and the search was a success. If, on the other hand, you click to the second page of results, or type in a new search string without clicking any of the results, we can surmise that the search engine didn’t serve up the results you wanted — and the program can learn from that mistake to deliver a better result in the future.
9. Natural Language Processing (NLP): NLP is being used in all sorts of exciting applications across disciplines. Machine learning algorithms with natural language can stand in for customer service agents and more quickly route customers to the information they need. It’s being used to translate obscure legalese in contracts into plain language and help attorneys sort through large volumes of information to prepare for a case.
10. Smart Cars: IBM recently surveyed top auto executives, and 74% expected that we would see smart cars on the road by 2025. A smart car would not only integrate into the Internet of Things, but also learn about its owner and its environment. It might adjust the internal settings — temperature, audio, seat position, etc. — automatically based on the driver, report and even fix problems itself, drive itself, and offer real time advice about traffic and road conditions.
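To show what the fraud-detection use case above can look like in practice, here is a minimal sketch of anomaly detection on transactions. It is my own illustration (not PayPal's actual system), it assumes scikit-learn and NumPy are installed, and all the transaction data is invented:

```python
# A minimal sketch of fraud detection framed as anomaly detection.
# Assumes scikit-learn and NumPy are installed; all transaction data is made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [transaction amount in dollars, hour of day]
rng = np.random.RandomState(0)
normal = rng.normal(loc=[50, 14], scale=[15, 3], size=(200, 2))  # everyday purchases
suspicious = np.array([[2500, 3], [1800, 4]])                    # large late-night transfers

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)  # learn what "normal" transactions look like

print(detector.predict(suspicious))  # -1 means flagged as anomalous
print(detector.predict(normal[:3]))  #  1 means the transaction looks normal
```

The same pattern -- learn what normal looks like, then flag what deviates from it -- also underpins the cloud-access monitoring mentioned in the data security example.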
Some Final words regarding AI
No matter how intelligent future artificial intelligences become—even general ones—they will never be the same as human intelligences. As we have argued, the mental development needed for all complex intelligence depends on interactions with the environment and those interactions depend, in turn, on the body—especially the perceptive and motor systems. This, along with the fact that machines will not follow the same socialization and culture-acquisition processes as ours, further reinforces the conclusion that, no matter how sophisticated they become, these intelligences will be different from ours. The existence of intelligences unlike ours, and therefore alien to our values and human needs, calls for reflection on the possible ethical limitations of developing AI. Specifically, we agree with Weizenbaum’s affirmation (Weizenbaum, 1976) that no machine should ever make entirely autonomous decisions or give advice that call for, among other things, wisdom born of human experiences, and the recognition of human values.
84% of enterprises believe investing in AI will lead to greater competitive advantages.
The true danger of AI is not the highly improbable technological singularity produced by the existence of hypothetical future artificial superintelligences; the true dangers are already here. Today, the algorithms driving Internet search engines or the recommendation and personal-assistant systems on our cellphones, already have quite adequate knowledge of what we do, our preferences and tastes. They can even infer what we think about and how we feel. Access to massive amounts of data that we generate voluntarily is fundamental for this, as the analysis of such data from a variety of sources reveals relations and patterns that could not be detected without AI techniques. The result is an alarming loss of privacy. To avoid this, we should have the right to own a copy of all the personal data we generate, to control its use, and to decide who will have access to it and under what conditions, rather than it being in the hands of large corporations without knowing what they are really doing with our data.
In 2017, VC investors poured more than $10.8 billion into startup companies that focus on AI and machine learning.
AI is based on complex programming, and that means there will inevitably be errors. But even if it were possible to develop absolutely dependable software, there are ethical dilemmas that software developers need to keep in mind when designing it. For example, an autonomous vehicle could decide to run over a pedestrian in order to avoid a collision that could harm its occupants. Outfitting companies with advanced AI systems that make management and production more efficient will require fewer human employees and thus generate more unemployment. These ethical dilemmas are leading many AI experts to point out the need to regulate its development. In some cases, its use should even be prohibited.
The five best countries for an AI job are China, the USA, Japan, the UK, and India.
One clear example is autonomous weapons. The three basic principles that govern armed conflict -- discrimination (the need to distinguish between combatants and civilians, or between a combatant who is surrendering and one who is preparing to attack), proportionality (avoiding the disproportionate use of force), and precaution (minimizing the number of victims and material damage) -- are extraordinarily difficult to evaluate, and it is therefore almost impossible for the AI systems in autonomous weapons to obey them. But even if, in the very long term, machines were to attain this capacity, it would be indecent to delegate the decision to kill to a machine.
Beyond this kind of regulation, it is imperative to educate the citizenry as to the risks of intelligent technologies, and to ensure that they have the necessary competence for controlling them, rather than being controlled by them. Our future citizens need to be much more informed, with a greater capacity to evaluate technological risks, with a greater critical sense and a willingness to exercise their rights. This training process must begin at school and continue at a university level. It is particularly necessary for science and engineering students to receive training in ethics that will allow them to better grasp the social implications of the technologies they will very likely be developing. Only when we invest in education will we achieve a society that can enjoy the advantages of intelligent technology while minimizing the risks. AI unquestionably has extraordinary potential to benefit society, as long as we use it properly and prudently. It is necessary to increase awareness of AI’s limitations, as well as to act collectively to guarantee that AI is used for the common good, in a safe, dependable, and responsible manner.
The road to truly intelligent AI will continue to be long and difficult. After all, this field is barely sixty years old, and, as Carl Sagan would have observed, sixty years are barely the blink of an eye on a cosmic time scale. Gabriel García Márquez put it more poetically in a 1986 speech (“The Cataclysm of Damocles”): “Since the appearance of visible life on Earth, 380 million years had to elapse in order for a butterfly to learn how to fly, 180 million years to create a rose with no other commitment than to be beautiful, and four geological eras in order for us human beings to be able to sing better than birds, and to be able to die from love.”