The Building Blocks of Artificial Intelligence

“Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.” - John McCarthy.

The field of AI is built atop a plethora of academic disciplines, from philosophy to cognitive science. In this essay, I will discuss the fields that have contributed ideas, concepts, techniques, and viewpoints to Artificial Intelligence.

I hope that understanding the layered nature of Artificial Intelligence will help you (the reader) appreciate how much intellectual work has gone into bringing humanity to the heights it has achieved, and recognise that no single person, company, field, or region will be responsible for building the AI systems of the future.

Let’s dive right in.

Philosophy

The role of Philosophy in the advent of AI is quite intuitive. At its core, Philosophy is an attempt by human beings to understand the human condition and our place in the world; at its core, the study of Artificial Intelligence is the study of how to mimic human intelligence.

How Philosophy contributes to the field of AI

  • Ethics - Philosophy provides ethical frameworks to grapple with the moral implications of AI systems as they take on higher-stakes roles in society. Areas like machine ethics and algorithmic bias draw heavily on moral philosophy.
  • Metaphysics - Philosophical discussions around consciousness, qualia, and the nature of the mind are relevant to developing AI that approximates or achieves consciousness. Philosophy also examines metaphysical questions around AI personhood.
  • Epistemology - Philosophical theories of knowledge acquisition, reasoning, and cognition inform efforts to replicate intelligence and reasoning in machines. Epistemology provides models for conceptual learning.
  • Logic - Formal logic is foundational to fields like knowledge representation, automated reasoning, and inference in AI. Philosophical logic provides formal rule sets that AI systems can implement.
  • Language - Philosophy of language, linguistics, and semantics inform computational approaches to natural language processing and communication in intelligent systems.
  • Existential risk - Philosophers contribute vital perspectives on existential risks from artificial general intelligence and how to align advanced AI with human values.
  • Theoretical framing - Philosophy supplies theoretical grounding, conceptual clarification, and diverse lenses for analyzing the capabilities and impacts of AI technology.
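The formal logic mentioned above can be made concrete with a tiny forward-chaining inference loop, the kind of mechanism knowledge-representation systems build on. This is only a sketch; the facts and rules are invented for illustration.

```python
# A minimal sketch of rule-based inference (repeated modus ponens).
# Facts and rules here are invented for illustration.

def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premise, conclusion)
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"socrates_is_human"}
rules = [("socrates_is_human", "socrates_is_mortal")]
print(forward_chain(facts, rules))
```

Real automated-reasoning systems use far richer logics (first-order logic, unification), but the loop above captures the basic idea of deriving new knowledge from formal rules.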

Mathematics

Probably no other field (except Philosophy and Neuroscience, of course) has contributed more to AI than Mathematics. Maths is the language of formalisation: while philosophers framed the fundamental questions surrounding AI, mathematics was required before the field could be treated as a serious, quantifiable science and, as a result, make progress.

The contribution of Mathematics to the field of AI centres on answering three crucial questions:

  • How do we reason with uncertain information?
  • What can be computed?
  • What are the valid rules from which we can draw conclusions?

The answers to these questions are what we know as Probability, Computation, and Logic, respectively.
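As a small illustration of reasoning with uncertain information, here is Bayes' rule applied to a toy diagnostic test. The numbers are made up: a test with 90% sensitivity and 95% specificity for a condition with 1% prevalence.

```python
# A toy illustration of reasoning under uncertainty with Bayes' rule.
# All numbers are invented for illustration.

def bayes_posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_posterior(prior=0.01, sensitivity=0.90, specificity=0.95)
print(f"{posterior:.3f}")  # roughly 0.154: a positive test is far from certain
```

Counter-intuitive results like this (a positive test implying only a ~15% chance of the condition) are exactly why probabilistic reasoning had to be formalised before machines could handle uncertainty.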

How Mathematics contributes to the field of AI

  • Algorithms - Mathematical logic underpins the algorithmic techniques used in areas like machine learning, neural networks, robotics, and planning/optimization. Algorithms leverage computational math operations.
  • Probability & Statistics - Probability theory and statistical methods are core to techniques like Bayesian networks, stochastic modelling, and machine learning. Stats enable handling uncertainty.
  • Linear Algebra - Linear algebra and multivariate calculus are widely used for representing and manipulating data needed for machine learning. Matrix math is especially important.
  • Incompleteness Theorems - Gödel's incompleteness theorems state that any sufficiently complex formal system will contain true statements that cannot be proven within the system. This applies to AI systems trying to reason about mathematics and the world through logical formal systems. It shows there are limits to what can be proved and computed. This informs AI approaches to knowledge representation and reasoning.
  • Computability Theory - Computability theory determines which problems can be solved by an algorithm at all, while its companion, computational complexity theory, classifies solvable problems by the resources they require. Together they allow AI researchers to determine which problems are tractable for algorithms to solve and help guide efficient approaches.
  • Tractability - A problem is considered tractable if it can be solved by an algorithm within a reasonable time. Many problems relevant to AI like combinatorial optimization are intractable in their general form. Tractability guides the design of algorithms and data structures for AI systems. The goal is to find ways to make hard problems tractable for practical use.
  • NP-Completeness - NP-completeness refers to a class of computational problems that are the most difficult to solve within NP (Nondeterministic Polynomial Time). Many important problems in AI like planning, scheduling, and combinatorial optimization are NP-complete. This means optimal or exact solutions cannot be found efficiently as problem sizes grow. Classifying problems as NP-complete helps AI researchers focus their efforts: it guides them to develop approximation algorithms, heuristics, machine learning approaches, and other practical techniques to make progress on these difficult problems.
  • Control Theory - Mathematical control theory aids in controlling the complex behaviour of autonomous systems like robots and self-driving vehicles.
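To make the Linear Algebra point above concrete, here is a single dense-layer forward pass written with plain Python lists so the matrix operations are explicit. The weights, bias, and input are invented for illustration; real systems use libraries like NumPy for exactly these operations.

```python
# A minimal sketch of the matrix math at the heart of machine learning:
# one dense-layer forward pass. All values are illustrative.

def matvec(W, x):
    """Matrix-vector product: each output is the dot product of a row with x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def relu(v):
    """Elementwise rectified linear activation."""
    return [max(0.0, a) for a in v]

W = [[0.5, -1.0], [2.0, 0.25]]   # 2x2 weight matrix (made-up values)
b = [0.1, -0.2]                   # bias vector
x = [1.0, 2.0]                    # input vector

h = relu([wx + b_i for wx, b_i in zip(matvec(W, x), b)])
print(h)
```

Training a deep network is essentially millions of operations like `matvec` plus calculus-driven weight updates, which is why linear algebra and multivariate calculus are so central.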

Economics

Economics, contrary to popular opinion, is not solely about money but rather about how people make choices with scarce resources to achieve their goals (preferred outcomes).

AI, on the other hand, involves training mathematical models on data about situations and the outcomes they led to, in order to get an algorithm to develop a program which consistently achieves the expected outcome (this is, of course, overly simplified).

How Economics contributes to the field of AI

  • Utility: The economic concept of utility, or the satisfaction derived from consuming goods and services, is important for AI goal-setting and reward functions. Maximizing utility provides an objective for agents to optimize in environments. Utility theory helps define problem objectives.
  • Game Theory: Game theory analyzes optimal decisions in competitive situations. It provides strategies when multiple AI players with conflicting interests interact, like in Reinforcement Learning systems. Game theory offers equilibria solutions for multi-agent systems.
  • Decision Theory: Decision theory deals with making choices under uncertainty and expected utility maximization. It provides principles for AI decision-making and planning in uncertain, real-world environments based on probabilities and inferred values of outcomes.
  • Operations Research: Operations research uses mathematical models to optimize processes and make decisions. Markov decision processes model sequential decision-making in AI planning problems to maximize expected long-term rewards through reinforcement learning.
  • Satisficing: Satisficing refers to acceptably good solutions that may not be theoretically optimal. Bounded rationality and satisficing help focus AI solutions on sufficient real-world performance rather than perfect solutions which are often infeasible to find.
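The utility and decision-theory ideas above can be sketched in a few lines: an agent chooses the action whose expected utility over uncertain outcomes is highest. The actions, probabilities, and utilities below are invented for illustration.

```python
# A small sketch of expected-utility maximization, the core of
# decision theory. All numbers are made up.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "safe":  [(1.0, 5.0)],                # certain, modest payoff
    "risky": [(0.5, 12.0), (0.5, -4.0)],  # gamble with expected utility 4.0
}
print(best_action(actions))
```

Reinforcement learning generalises this picture: instead of being handed the probabilities and utilities, the agent estimates them from experience.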


Neuroscience

The impact of Neuroscience is probably the most intuitive. Simplistically, AI is the study of building systems that think and act like humans, and Neuroscience is the study of the human brain, the data processing centre of the body. As you can hopefully see, the connection is immediately apparent.

How the field of Neuroscience contributes to the field of AI

  • Neurons: Understanding the structure and function of biological neurons in the brain inspired the development of artificial neural networks, a foundational technique in deep learning. Neural networks are made up of interconnected nodes like neurons, using models of how neurons activate to learn complex patterns. The analogies between artificial and biological neural processing have advanced AI capabilities.
  • Singularity: The Singularity is a hypothetical point in time when artificial intelligence exceeds human intelligence. The idea stems from analyses of exponential growth in computing power and extrapolations about AI's potential for recursive self-improvement. While highly speculative, the concept informs discussions around anticipated advances in AI capabilities and the need to ensure advanced AI remains beneficial. Planning for the profound impacts of hypothetical strong AI depends on perspectives from neuroscience on the nature of intelligence.
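The neuron analogy above can be sketched as a single artificial neuron: a weighted sum of inputs passed through an activation function, loosely modelled on a biological neuron firing when its summed inputs cross a threshold. The weights and inputs below are illustrative.

```python
# A sketch of one artificial neuron, the building block of neural networks.
# Weights, bias, and inputs are invented for illustration.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron(inputs=[1.0, 0.5], weights=[0.8, -0.4], bias=0.1)
print(f"{out:.3f}")
```

Stacking many such units in layers, and adjusting the weights from data, gives the artificial neural networks described above.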


Psychology

In simple words, Psychology is the study of how humans act. Since we have already covered the fact that AI is built to emulate humans, I will presume that the presence of Psychology on this list is self-explanatory.

How the field of Psychology contributes to the field of AI

  • Behaviourism: Behaviourism focuses on objective, observable behaviours and how they can be moulded by environmental stimuli and reinforcement. This inspired current reinforcement learning systems, which are built on behaviourist theories.
  • Cognitive Psychology: Cognitive psychology studies internal mental processes underlying behaviours like perception, thinking, memory, attention, language, etc. This informs AI tasks like computer vision, natural language processing, planning, and problem-solving by providing insights into human cognition. Cognitive architectures in AI emulate theoretical models from cognitive psychology of how the mind processes information.
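The behaviourist idea above, that behaviour is strengthened or weakened by the rewards that follow it, is the seed of reinforcement learning. Here is a minimal sketch in which an agent's estimate of an action's value is nudged toward each observed reward; the rewards and learning rate are invented for illustration.

```python
# A minimal sketch of reward-driven learning, the behaviourist core of
# reinforcement learning. Rewards and learning rate are made up.

def update_value(value, reward, learning_rate=0.1):
    """Move the value estimate a small step toward the observed reward."""
    return value + learning_rate * (reward - value)

value = 0.0
for reward in [1.0, 1.0, 0.0, 1.0]:  # rewards observed after taking an action
    value = update_value(value, reward)
print(f"{value:.3f}")
```

Full reinforcement learning systems extend this update rule to many states and actions (as in Q-learning), but the stimulus-reinforcement loop is the same.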


Computer Engineering

The impact of Computer Engineering on the field of AI is sometimes understated. Computer Engineering gave us the computers necessary to build today’s AI systems.

How the field of Computer Engineering contributes to the field of AI

  • TPUs: Tensor Processing Units (TPUs) are application-specific integrated circuits (ASICs) developed by Google specifically for neural network machine learning. TPU hardware architecture is optimized for the dense linear algebra operations required in deep learning models. TPUs accelerate training and inference in large neural networks, enabling more advanced AI applications.
  • GPUs: Graphics Processing Units (GPUs) are computing chips originally designed for graphics rendering that also excel at parallel processing. Their high memory bandwidth and computational power make them well-suited for running deep-learning algorithms. GPUs enabled the training of complex neural networks and catalyzed the rise of deep learning for AI.
