AI: Cognitive Technology As A New World View and Paradigm Shift Of The 21st Century

By Maan Barazy

President co-founder of National Council for Entrepreneurship and Innovation – Entrepreneurs Ventures Network Holding SAL – Data and Investment Consult Lebanon SARL

Twitter: @Maanbarazy – linkedin.com/in/maan-barazy-ba873921

Abstract

Artificial Intelligence (AI) and the latest advancements in Information Technology (IT) have drastically altered our view of the world. With the ability to process and analyze vast amounts of data in real time, AI has revolutionized several industries, including healthcare, finance, and manufacturing. For instance, AI-powered medical devices can assist doctors in making diagnoses and predicting patient outcomes more accurately, while also helping them identify early warning signs of disease. Furthermore, the rise of Machine Learning (ML) has enabled organizations to leverage predictive analytics to gain valuable insights into consumer behavior and market trends. For example, online retailers such as Amazon use AI algorithms to personalize product recommendations based on a customer's browsing and purchase history. In this way, AI-powered systems are able to offer customized experiences and improve overall customer satisfaction. A key challenge is the potential impact of AI on human consciousness and perception. As AI systems become more advanced and more integrated into our daily lives, there is a risk that they may alter our perception of the world and our understanding of reality. This raises important questions about the ethical implications of AI and the need to ensure that AI systems are designed and used in a responsible and ethical manner. The shift brought about by a revolutionary science such as AI, and applications such as ChatGPT, is telling us once again that man is not at the center of the universe as an intelligent being. The ideal of the "Cartesian Man" embodied the basic tenets of Renaissance humanism, which considered man the centre of the universe, limitless in his capacities for development; this led to the notion that men should try to embrace all knowledge and develop their own capacities as fully as possible. Unfortunately, it seems that AI is now replacing those limitless capacities. My question is whether AI is a potentially transformative technology that can enhance our understanding of the world and ourselves.

AI is a new paradigm in our world view, and it is transforming the way we think about ourselves, our relationships with each other, and the world around us. In this part of the paper I will explore how AI is altering our perception of the world; in the second part I will explore how AI is a new paradigm in our world view and the implications that this has for our consciousness and business models. My focus will be to shed some understanding on the issue of AI shaping our understanding of the world and how this understanding is creating a paradigm shift in the advancement of scientific discovery. I will use examples from the philosophy of phenomenology and from Thomas Kuhn.

AI technology has disrupted our current understanding of computer technology, and it has further evolved in our cognitive mind as a model for creating "states of affairs" we could not imagine in solving complex computations. ChatGPT and similar systems today have been trained to process natural language with remarkable accuracy and fluency, enabling them to perform a wide range of language-related tasks with ease. Thanks to advances in machine learning and artificial intelligence, models like ChatGPT have been able to learn from vast amounts of data, allowing them to understand and generate human-like language. One of the key strengths of ChatGPT and similar models is their ability to generate coherent and engaging text in response to a wide range of prompts. Whether it's answering questions, summarizing articles, or generating creative writing, these models are able to produce high-quality language with remarkable efficiency and consistency. But as impressive as these models are, they still have their limitations. For example, they can struggle with tasks that require common sense reasoning or background knowledge, as they rely purely on statistical patterns in language data. Furthermore, the language generated by these models can sometimes be biased or inappropriate, reflecting the biases and prejudices present in the training data. Despite these challenges, the potential applications of language models like ChatGPT are enormous. From chatbots and customer service systems to language translation and content creation, these models are already transforming the way we interact with language and each other.[i]

In short, AI is primarily a computer technology, but it is also a philosophy driven by one idea: to develop Artificial General Intelligence (AGI), a synthetic intelligence that would match or exceed human intelligence. This paper seeks to shed some light on these issues, mainly on whether AI is the new cognitive-technology paradigm shift. I am focusing on Thomas Kuhn's[ii] paradigm theory to explain this. Thomas Kuhn argued that science does not evolve gradually[iii] toward truth. Science has a paradigm that remains constant before going through a paradigm shift when current theories cannot explain some phenomenon and someone proposes a new theory. Thus, with systems such as ChatGPT, we are living in a time of paradigm shift in our concept and perception of the world. A paradigm is a basic framework of assumptions, principles and methods from which the members of the community work. It is a set of norms that tells scientists how to think and behave, and although in science there are rival schools of thought, there is still a single paradigm that all scientists accept uncritically. Scientists accept the dominant paradigm until anomalies are thrown up[iv]. Scientists then begin to question the basis of the paradigm itself, and new theories emerge which challenge the dominant paradigm. Eventually, one of these new theories becomes accepted as the new paradigm. Cognitive technology[v] is defined as "a field of computer science that mimics human brain function through a variety of means, including natural language processing, data mining and pattern recognition". Forecasts indicate that these technologies will be instrumental in shaping human-technology interaction in the coming years, particularly in the areas of automation and robotics, machine learning and IT.

A scientific revolution occurs when: (i) the new paradigm better explains the observations and offers a model that is closer to the objective, external reality; and (ii) the new paradigm is incommensurate with the old. For example, Lamarckian evolution was replaced with Darwin’s theory of evolution by natural selection. A new paradigm will be established, but not because of any logically compelling justification.

The reasons for the choice of a paradigm are largely psychological and sociological. The new paradigm better explains the observations and offers a model that is closer to the objective, external reality. Different paradigms are held to be incommensurable: the new paradigm cannot be proven or disproven by the rules of the old paradigm, and vice versa. In short, AI has moved us, in Kuhn's terminology, from normal science to the discovery of anomalies[vi].

A shift brought about by a revolutionary science such as AI, and applications such as ChatGPT, is telling us once again that man is not at the center of the universe as an intelligent being. The ideal of the "Cartesian Man" embodied the basic tenets of Renaissance humanism, which considered man the centre of the universe, limitless in his capacities for development; this led to the notion that men should try to embrace all knowledge and develop their own capacities as fully as possible. Unfortunately, it seems that AI is replacing those limitless capacities. Since humanism, the name of the game seems to have been the search for a universal truth in scientific discovery. AI today is the new truth whether we like it or not. We speak as though we could bring ourselves closer and closer to this truth[vii]. Like diving deeper into a dark abyss (much as the philosopher Friedrich Nietzsche might have warned us), we can only see some of what is in front of us. And, until we can figure out how to interpret these mysteries in science, we must make sure our methods of speculation and of evaluating claims are as close to the objective truth as we can possibly get. Through each shift of a paradigm, we soak in new sources of light and adjust our vision accordingly. Only then can we understand the importance of our current paradigm in artificial intelligence. Through that, we can inch closer to the elusive objective truth as Kuhn envisioned. The intersection of artificial intelligence (AI) and philosophy has been the subject of considerable interest and debate over the years. From questions about the nature of intelligence and consciousness to the ethics of creating intelligent machines, philosophers have grappled with the implications of AI for a wide range of philosophical issues.

One of the key areas of philosophical inquiry related to AI is the nature of intelligence itself. Philosophers have long debated the question of what it means to be intelligent, and how intelligence relates to other cognitive capacities such as consciousness, perception, and reasoning. The development of AI has raised new and challenging questions about these issues, as researchers have sought to create machines that can mimic or surpass human intelligence in various domains. Another important area of philosophical inquiry related to AI is the ethics of creating intelligent machines. As AI technologies continue to advance, the possibility of creating machines that can think, reason, and learn like humans raises a host of ethical questions, such as whether it is morally permissible to create machines that can feel pain or suffer, or whether intelligent machines should be granted legal rights and protections. Philosophers have also explored the epistemological implications of AI, examining how intelligent machines challenge our traditional ways of knowing and understanding the world. For example, the use of machine learning algorithms in scientific research raises questions about how we validate knowledge and what counts as evidence.

Literature Review

Interest in issues related to the development of artificial intelligence and cognitive technologies has recently grown significantly. The scientific community is studying, analyzing and stimulating the development of these technologies from various perspectives. The popular Google Scholar database shows 2,730,000 results for the search query "Cognitive Technologies" and 3,490,000 results for "Artificial Intelligence". However, the search query "Cognitive technologies and artificial intelligence" returns 896,000 results in the above database. For the queries "Cognitive Technologies", "Artificial Intelligence" and "Cognitive Technologies and Artificial Intelligence", the time series for the past year (2021) showed a decreasing trend in all three categories. Nonetheless, there is still interest in this issue, especially since the Google Scholar database contains 1,800 inventoried papers on "Cognitive Technologies", 2,160 on "Artificial Intelligence" and 1,340 for both categories published within less than ten days of January 2022.

The history of research in artificial intelligence (AI) is a rich and complex tapestry of scientific inquiry, technological innovation, and social and cultural change. Over the past several decades, AI research has evolved from a niche field of computer science into a major interdisciplinary endeavor that touches on everything from neuroscience and psychology to ethics and philosophy. Early efforts in AI research can be traced back to the 1950s, when pioneers like John McCarthy, Marvin Minsky, and Claude Shannon began to explore the possibility of creating machines that could simulate human intelligence. One of the earliest and most influential projects was the Logic Theorist, a program developed by Allen Newell and Herbert Simon that could prove mathematical theorems using symbolic reasoning. This work paved the way for a wide range of AI applications, from expert systems and natural language processing to robotics and computer vision.

In the 1960s and 1970s, AI research underwent a period of rapid growth and expansion, fueled in part by government funding and private sector investment. During this time, researchers made significant advances in areas such as machine learning, pattern recognition, and knowledge representation, laying the groundwork for many of the AI technologies we use today. However, the field of AI soon faced a backlash, as early promises of "thinking machines" failed to materialize and some researchers began to question the validity of the approach. This period, known as the "AI winter," lasted from the late 1970s until the mid-1990s, during which time funding for AI research declined sharply. In the 1990s and 2000s, AI research experienced a resurgence, fueled in part by advances in computing power and the availability of large amounts of data. Researchers began to develop new machine learning algorithms, such as deep neural networks, that could learn from data and make predictions based on patterns. These techniques paved the way for breakthroughs in areas such as speech recognition, image classification, and natural language processing, and helped to revive interest in the field.

To my knowledge there has not yet been any real discussion of how AI is developing a new conceptual framework, in the sense of Kuhn's structure of scientific revolutions, for our mindset and our vision of the world. There is a need to research how ChatGPT, for example, is shaping our perception of the world. This paper is a step in this direction.

Here are a few examples of authors and their books that explore the intersection of artificial intelligence and philosophy:

· John Searle, "Minds, Brains, and Programs" (1980) - In this influential paper, Searle introduced the "Chinese Room" thought experiment to critique the notion of strong artificial intelligence.

· Hubert Dreyfus, "What Computers Can't Do" (1972) - Dreyfus argues that machines can never truly replicate human intelligence because they lack the ability to engage in practical reasoning and embodied experience.

· Marvin Minsky, "The Society of Mind" (1986) - Minsky proposed that human intelligence can be understood as the result of many interacting agents or "sub-minds," which he called "agents."

· Daniel Dennett, "Consciousness Explained" (1991) - Dennett argues that consciousness is a matter of information processing and can be fully explained by the physical processes of the brain.

· Rodney Brooks, "Intelligence without Representation" (1991) - Brooks argues that intelligence can emerge from a system of simple, decentralized agents that interact with the environment, rather than from a central, symbolic representation.

· David Chalmers, "The Conscious Mind" (1996) - Chalmers proposes that consciousness is an irreducible aspect of the universe that cannot be fully explained by physical processes.

· Francisco Varela, Evan Thompson, and Eleanor Rosch, "The Embodied Mind" (1991) - Varela, Thompson, and Rosch argue that cognition arises from the dynamic interaction between the organism and its environment.

· Stanislas Dehaene, "Consciousness and the Brain" (2014) - Dehaene explores the neural basis of consciousness and argues that it arises from the global workspace of the brain, which integrates information from different sources.

· Max Tegmark, "Life 3.0: Being Human in the Age of Artificial Intelligence" (2017) - Tegmark explores the implications of artificial intelligence for human consciousness and argues that we need to ensure that AI is aligned with our values and goals.

· Nick Bostrom, "Superintelligence: Paths, Dangers, Strategies" (2014) - Bostrom examines the risks and potential benefits of artificial intelligence, including the possibility that intelligent machines may eventually surpass human intelligence and consciousness.

· Susan Schneider, "The Language of Thought and the Language of Machines" (2016) - Schneider explores the implications of AI for our understanding of human consciousness and argues that we need to develop new ways of thinking about the nature of consciousness in order to fully understand the potential of intelligent machines.

· Stuart Russell, "Human Compatible: Artificial Intelligence and the Problem of Control" (2019) - Russell argues that the development of artificial intelligence raises fundamental questions about human consciousness and the nature of control, and explores how we can ensure that intelligent machines are aligned with our goals and values.


· Susan Schneider, "Artificial You: AI and the Future of Your Mind" (2019) - Schneider explores the implications of AI for our understanding of the mind and consciousness, arguing that intelligent machines may eventually pose a fundamental challenge to our concept of the self.

Perception of the world or Cognitive understanding of AI

My take on the issue is that, for the first time in history, technology does tell us something about the world. Traditionally, technology does not seek to discover theories about the world; it builds artifacts to fulfill certain roles or functions, and it can live (for some time) without having a theory, or a correct theory – as is the case with AI, a rapidly evolving field that has become an integral part of our world today. It has changed the way we view the world and the way we interact with it. AI is a new paradigm in our world view, and it is transforming the way we think about ourselves, our relationships with each other, and the world around us. In this part of the paper I will explore how AI is altering our perception of the world; in the second part I will explore how AI is a new paradigm in our world view and the implications that this has for our society. My focus will be to shed some understanding on the issue of AI shaping our understanding of the world and how this understanding is creating a paradigm shift in the advancement of scientific discovery.

Phenomenologists view AI as a potentially transformative technology that can change the way we interact with the world and understand ourselves. At the heart of phenomenology is the concept of intentionality, which refers to the relationship between a conscious subject and the objects or phenomena that they perceive. Phenomenologists argue that this relationship is not a passive one, but rather an active and creative process that involves interpretation and meaning-making by the subject. AI technologies have the potential to challenge this view of intentionality by creating algorithms and programs that can process vast amounts of data and make decisions based on this data without any conscious intentionality. However, phenomenologists argue that this view of AI is too simplistic and that the use of AI can be understood within the framework of intentionality.

Phenomenologists see AI as a tool that can be used to enhance our understanding of the world by providing new ways of perceiving and interpreting information. For example, AI algorithms can be used to identify patterns and connections in large datasets that would be impossible for a human to discern. This can lead to new insights and discoveries that can enhance our understanding of complex phenomena. However, phenomenologists also caution against the uncritical adoption of AI technologies without considering the potential consequences for human subjectivity and agency. They argue that the use of AI can potentially undermine the role of conscious intentionality in the interpretation of the world and lead to a loss of control and autonomy. One way in which phenomenologists have contributed to the development of AI is through the development of embodied and enactive approaches. These approaches emphasize the importance of the body and the environment in shaping the cognitive processes of humans and other intelligent agents. Embodied and enactive AI technologies seek to incorporate these insights into the design and development of AI systems. AI algorithms and programs are designed to process vast amounts of data and make decisions based on this data without any conscious intentionality. This has led some to argue that the use of AI technologies undermines the role of conscious intentionality in the interpretation of the world. However, proponents of phenomenology argue that this view of AI is too simplistic and that these technologies can be understood within the framework of conscious intentionality.

While most phenomenologists did not specifically address the topic of artificial intelligence (AI) in their writings, their philosophy provides insight into their possible views on the subject. Phenomenologists argue that the interpretation of the world is not a purely cognitive process, but rather an embodied and situated one that is shaped by the individual's lived experience and the context in which they are situated. This means that even in the use of AI technologies, conscious intentionality remains a vital part of the interpretation process. For example, the design and development of AI algorithms are shaped by the conscious decisions and interpretations of their creators, and the results of these algorithms are interpreted and applied by conscious human agents.

Furthermore, phenomenologists argue that the use of AI technologies can actually enhance our ability to interpret the world by providing new ways of perceiving and interpreting information. AI algorithms can process vast amounts of data and identify patterns and connections that would be impossible for a human to discern. This can lead to new insights and discoveries that can enhance our understanding of complex phenomena. A philosopher such as Merleau-Ponty would be critical of AI systems that attempt to interpret the world without any embodiment or lived experience. Without the active participation of a body in the process of interpretation, AI may struggle to develop a nuanced understanding of the world and may be limited to a purely mechanistic and reductionist approach.

Reconciling AI and Philosophy

Martin Heidegger's philosophy emphasizes that human understanding of the world is always situated in language and interpretation. In contrast, AI is typically designed to process vast amounts of data and make decisions based on this data without any language or interpretation. This suggests that a philosopher like Heidegger would be skeptical of the ability of AI to truly understand and engage with the world in the same way that humans do. Furthermore, Heidegger's philosophy emphasizes the importance of human existence as being "thrown" into the world. He argues that human beings are born into a world that is already full of meaning and significance, and that our existence is shaped by the cultural and historical context in which we find ourselves.

In contrast, AI is typically designed to operate in a neutral and objective manner, without any cultural or historical context. This suggests that Heidegger[viii] would be critical of AI systems that attempt to operate without any consideration of the cultural and historical context in which they are being used. Heidegger's philosophy also emphasizes the importance of "being-in-the-world." He argues that human beings do not exist as isolated individuals, but rather as beings that are always already connected to the world around them. Our experiences and understanding of the world are shaped by our relationships with other beings and objects in the world. In contrast, AI is typically designed to operate as an isolated system, without any connection to the world beyond its data inputs and outputs. This suggests that Heidegger would be critical of AI systems that attempt to operate in isolation from the broader context of the world around them. Another illustration of this is Edmund Husserl's theory of phenomenology which provides a philosophical framework for understanding the nature of subjective experience and how it shapes our understanding of the world around us.

Can the theory of phenomenology inform the way that we approach the design of AI systems? Husserl's theory of phenomenology is based on the idea that our understanding of the world is shaped by our subjective experiences of it. According to Husserl, our perception of the world is not objective, but rather is shaped by our individual consciousness and perception. Husserl argued that by studying the structure of subjective experience, we can gain a deeper understanding of the nature of reality. In the context of AI, Husserl's theory can be applied to the design of systems that are intended to interact with humans in a way that is more natural and intuitive. By understanding the nature of subjective experience, we can design AI systems that are more responsive to the needs and preferences of individual users, and that provide a more personalized experience. One way that Husserl's theory can inform the design of AI systems is through the use of natural language processing (NLP) technologies. NLP technologies are designed to enable machines to understand and respond to human language in a way that is more natural and intuitive. By analyzing the structure of language and the way that humans use it to communicate, NLP systems can be designed to provide more accurate and relevant responses to user input.

Another way that Husserl's theory can inform the design of AI systems is through the use of machine learning algorithms. Machine learning algorithms are designed to enable machines to learn from data, and to adapt and improve over time. By analyzing patterns in data and learning from past experiences, machine learning algorithms can be used to create AI systems that are more responsive to individual users and that provide a more personalized experience.
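To make the idea of systems that learn from data and personalize their output more concrete, here is a minimal illustrative sketch of a learning-based recommender. The tiny ratings matrix, the item names and the use of scikit-learn's nearest-neighbour search are assumptions introduced only for this example; they are not drawn from the text above.

```python
# Minimal sketch: personalization by learning from other users' behaviour.
# Ratings matrix, item names and neighbour count are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows = users, columns = items; values = past ratings (0 = not yet rated).
ratings = np.array([
    [5, 4, 2, 1],
    [4, 5, 3, 1],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
])
items = ["item_a", "item_b", "item_c", "item_d"]

# Learn which existing users are most similar to a given rating profile.
model = NearestNeighbors(n_neighbors=2, metric="cosine").fit(ratings)

# A new user who liked the first two items but has not tried the others.
new_user = np.array([[5, 5, 0, 0]])
_, neighbour_idx = model.kneighbors(new_user)

# Recommend the unrated items that similar users rated most highly.
neighbour_mean = ratings[neighbour_idx[0]].mean(axis=0)
suggestions = [items[i] for i in np.argsort(-neighbour_mean) if new_user[0, i] == 0]
print(suggestions)
```

The point of the sketch is only that the system's suggestions are derived from statistical patterns in past behaviour rather than from rules written out in advance, which is the sense of "learning from past experiences" used above.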

The paradigm shift

To begin, it is essential to define what we mean by a paradigm shift. A paradigm shift is a fundamental change in our understanding of the world. It involves a shift in the way we think about the world, the assumptions we make, and the way we approach problems. In the case of AI, the shift involves a move away from the traditional, linear model of problem-solving to a more dynamic, data-driven approach. AI is a new paradigm because it challenges our traditional ways of thinking about intelligence, cognition, and creativity. It is based on the idea that machines can learn, reason, and make decisions, just like humans. This is a significant departure from the traditional view of machines as dumb tools that only do what they are programmed to do. AI has the potential to revolutionize the way we think about intelligence and the nature of consciousness itself. The impact of AI on our world view is far-reaching. It has already changed the way we interact with technology, from voice assistants like Siri and Alexa to self-driving cars and automated manufacturing systems. It has also changed the way we think about our own cognitive abilities. For example, the development of machine learning algorithms has shown us that we can teach machines to recognize patterns and make predictions based on data. This has led to a new understanding of the nature of cognition and has challenged our traditional assumptions about what it means to be intelligent.

However, we can still underline that:

· Technology is practical, oriented towards results, and less concerned with theory. In fact, in technology, implementation often is the theory – theory follows and is retrofitted.

· Technology does not discover theories about the world; it builds artifacts to fulfill certain roles or functions.

· Technology can live (for some time) without having a theory, or a correct theory – as is the case with AI.

Another sense in which AI is a new paradigm is that it is changing the way we think about our relationship with the natural world. For example, AI is being used to monitor and predict environmental changes, such as climate change and deforestation. This has led to a new understanding of our impact on the planet and the need for more sustainable practices. AI is also being used in agriculture to improve crop yields and reduce waste. This has the potential to transform the way we think about food production and distribution. AI is also changing the way we think about our relationships with each other. For example, AI is being used in healthcare to improve patient outcomes and reduce costs. This has the potential to revolutionize the way we think about healthcare and the role of doctors and other healthcare professionals. AI is also being used in education to personalize learning and improve student outcomes. This has the potential to transform the way we think about education and the role of teachers and other educators. With the sub-agenda of replacing "limited" human cognitive powers with synthetic technology, i.e., replacing humans, AI technology threatens our ontological and epistemic status as primary epistemic agents. AI is already moving beyond our control and understanding – a new paradigm in the philosophy of technology.

Thomas Kuhn's thesis in The Structure of Scientific Revolutions

In order to approach the importance of the subject I will refer to Thomas Kuhn's thesis in The Structure of Scientific Revolutions. Kuhn's theory of scientific revolutions suggests that scientific knowledge is not developed through a linear process of accumulation, but rather through a series of revolutionary changes in our understanding of the world. According to Kuhn, scientific revolutions occur when a new paradigm emerges that challenges the existing scientific paradigm. This new paradigm provides a new way of thinking about the world, and it leads to a shift in the way that scientists approach problems and gather evidence. In the context of AI, Kuhn's theory can be applied to the way that new paradigms have emerged in the field over time. The development of AI can be seen as a series of revolutions, each of which has challenged the existing paradigm and led to a new way of thinking about the problem of creating intelligent machines.[ix]

Machine learning algorithms are based on the idea that machines can learn from data, just like humans. This approach represented a major breakthrough in AI research, and it paved the way for the development of new applications and technologies. Deep learning algorithms are based on the idea of artificial neural networks, which are designed to mimic the way that the human brain processes information. This approach represented another major breakthrough in AI research, and it paved the way for the development of new applications and technologies. The development of these new paradigms in AI research can be seen as scientific revolutions, according to Kuhn's theory. Each new paradigm challenged the existing scientific paradigm and provided a new way of thinking about the problem of creating intelligent machines. These new paradigms also led to a shift in the way that AI researchers approached the problem and gathered evidence. Kuhn also focuses upon one specific component of the disciplinary matrix[x]: the consensus on exemplary instances of scientific research. In Kuhn's vision, scientific development is made up of three main components. A paradigm, according to Kuhn, is a set of universally recognized principles, methodological processes and cultural concepts that refers to the work of the "scientific community" of a certain era[xi]. The scientific community is made up of scientists who, possessing the same paradigm, share the same ethical vision, assessment criteria, interpretative models, methods and solutions for solving problems, and who believe their successors ought to be educated on the basis of these same contents and values[xii].

The Normal Science: Machine Learning

In his seminal work, "The Structure of Scientific Revolutions," philosopher of science Thomas Kuhn proposed that scientific research progresses through two distinct phases: "normal science" and "revolutionary science." Normal science is characterized by a stable and widely accepted paradigm that guides research and discovery within a given field. Machine learning, a subset of artificial intelligence, can be seen as an example of normal science for scientific discovery in Kuhn's theory. Machine learning is a computational approach that uses statistical methods to allow computer systems to automatically learn from data and improve their performance on a specific task over time. This technology has been used across a variety of scientific disciplines, including biology, physics, chemistry, and engineering, to extract patterns and relationships from complex data sets that would be difficult or impossible to detect using traditional analytical methods.
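As a concrete, if deliberately simple, illustration of this routine machine-learning workflow, the following sketch fits a statistical model to data and then checks how well the learned patterns generalize to unseen examples. The synthetic dataset and the choice of scikit-learn and logistic regression are assumptions made only for the example; any tabular scientific measurements and any standard learning algorithm could take their place.

```python
# Minimal sketch of the routine machine-learning workflow: fit, then evaluate.
# The synthetic dataset is an illustrative assumption standing in for real
# scientific measurements.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 500 synthetic "observations" with 10 measured features and 2 classes.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a statistical model that optimizes for a specific objective.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out accuracy: the everyday "puzzle-solving" measure of a model's fit.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```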

In the context of Kuhn's theory, machine learning can be seen as a normal science because it operates within a well-established paradigm of statistical analysis and pattern recognition. This paradigm is based on the assumption that patterns and relationships in data can be automatically detected through mathematical algorithms that optimize for a specific objective. Machine learning researchers work within this paradigm, developing and testing new algorithms, evaluating their performance on various datasets, and contributing to the refinement of the existing knowledge base. However, Kuhn also proposed that normal science is not without its limitations. According to Kuhn, the normal science period occurs when:

· A paradigm is established, which lays the foundations for legitimate work within the discipline. Scientific work then consists of the articulation of the paradigm in solving puzzles that it throws up.

· A paradigm is a conventional basis for research; it sets a precedent.

· Puzzles that resist solutions are seen as anomalies.

· Anomalies are tolerated and do not cause the rejection of the theory, as scientists are confident these anomalies can be explained over time.

· Scientists spend much of their time in the Model Drift step, battling anomalies that have appeared. They may or may not know this or acknowledge it.

· It is necessary for normal science to be uncritical. If all scientists were critical of a theory and spent time trying to falsify it, no detailed work would ever get done.

Understanding how anomalies created a new world view

The paradigm within which normal science operates can become too rigid, leading to a lack of innovation and a failure to address important questions that fall outside of the established framework. This can lead to the need for revolutionary science, where a new paradigm emerges that fundamentally alters the way researchers think about a particular field. In the case of machine learning, the limitations of the established paradigm are becoming increasingly apparent. For example, there are concerns that the reliance on large data sets may lead to overfitting and inaccurate predictions, and that the lack of interpretability of many machine learning algorithms may limit their usefulness in certain contexts. These challenges are driving the development of new approaches to machine learning, such as explainable AI, that aim to address these limitations and expand the potential of the field.

Artificial intelligence (AI) has revolutionized the way we think about technology, but it has also created a need for a new understanding of the world. As we rely more and more on AI systems to automate tasks and make decisions, we need to develop a deeper understanding of how these systems work, and what their impact is on our society and our planet. In this section, we will explore how AI has created this need for a new understanding of the world, with examples and references to relevant authors. One of the key ways in which AI has created a need for a new understanding of the world is through its impact on employment. As AI systems become more advanced, they are capable of performing tasks that were once the exclusive domain of human workers. This has led to concerns about job displacement and the impact on the economy. In their study "The Future of Employment: How Susceptible Are Jobs to Computerization?", economists Carl Frey and Michael Osborne argue that up to 47% of all jobs in the US are at risk of being automated in the coming years.

Another way in which AI has created a need for a new understanding of the world is through its impact on ethics and morality. As AI systems become more advanced, they are capable of making decisions that have a profound impact on human lives, such as whether to grant a loan or diagnose a medical condition. However, these systems are only as ethical as the data and algorithms that underlie them, which can be biased or flawed. In her book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy", mathematician Cathy O'Neil argues that AI systems can perpetuate and even amplify existing biases and inequalities in society.

AI has also created a need for a new understanding of the world in the field of environmental science. As we face the challenges of climate change, AI can be used to develop more efficient and sustainable systems for energy, agriculture, and transportation. However, it can also contribute to the problem by consuming vast amounts of energy and producing significant amounts of carbon emissions. In his book "The Efficiency Paradox: What Big Data Can't Do", economist Edward Tenner[xiii] argues that our reliance on AI for efficiency may actually be hindering our ability to address environmental problems in a more fundamental way. In addition to these specific examples, AI has created a need for a new understanding of the world in more general terms. As AI systems become more sophisticated, they are capable of modeling and simulating complex systems in ways that were once thought to be impossible. This has led to new insights and discoveries in fields such as physics, chemistry, and biology. However, it has also raised fundamental questions about the nature of knowledge and understanding. In his book "The Mind in the Machine: Artificial Intelligence and the Future of Humanity", philosopher James Hughes argues that AI is challenging our traditional concepts of knowledge and expertise, and forcing us to rethink what it means to understand the world around us.

The new anomaly in scientific discovery: The advent of AGI?

Artificial general intelligence has been described as the "intelligence of a machine capable of understanding the world as well as any human being, and with the same potential to learn how to perform a massive range of tasks with extremely high efficiency" (Arek 2020). In the symbolic paradigm, an AI system acquires knowledge and reasoning about the world based on a set of a priori concepts through which the AI system can make logical inferences about the world; an exemplary application of this is a computer playing chess. The development of artificial general intelligence (AGI) poses a new anomaly in scientific discovery, as it raises fundamental questions about the nature of intelligence and the limits of human understanding. AGI refers to a hypothetical form of artificial intelligence that is capable of performing any intellectual task that a human being can. Unlike narrow AI systems, which are designed to perform specific tasks, AGI is capable of learning and adapting to new situations, making it potentially more powerful than any AI technology developed to date. One of the key challenges posed by AGI is the question of how to measure and evaluate its performance. Because AGI would be capable of learning and adapting to new situations, it would be difficult to determine whether it has truly achieved general intelligence. This could lead to debates and disagreements about whether a particular AI system qualifies as AGI, and whether it has achieved a level of intelligence that is truly comparable to that of a human being.

Another challenge posed by the development of AGI is the ethical implications of creating machines that are potentially more intelligent than humans. This raises concerns about the impact on employment and the economy, as well as the potential for AGI to be used for military or other purposes that could be harmful to humanity. The development of AGI could also have profound implications for our understanding of intelligence and consciousness. Some experts believe that the development of AGI could lead to new insights into the nature of human intelligence and consciousness, while others argue that it could lead to a complete redefinition of these concepts.

For example, philosopher Nick Bostrom has argued that the development of AGI could lead to the creation of super intelligent machines that are capable of outsmarting human beings in virtually any intellectual task. This could lead to a scenario in which humans are no longer the most intelligent species on Earth, and in which we must find ways to coexist with machines that are potentially more intelligent and powerful than ourselves. One example of the potential impact of AGI is the development of autonomous vehicles. While current autonomous vehicles are still considered narrow AI, with the development of AGI, it is possible that we could see fully autonomous vehicles that are capable of navigating complex environments and making decisions based on a broad range of inputs. This could revolutionize the transportation industry, but it could also have significant ethical and safety implications.

AGI is understood as a general-purpose capability, not restricted to any narrow collection of problems or domains, and including the ability to broadly generalize to fundamentally new areas (Cassimatis et al. 2008). Artificial general intelligence (AGI) is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution. The intention of an AGI system is to perform any task that a human being is capable of (Lutkievich 2022).

In Kuhn's terms, AI represents a shift away from normal science, which Kuhn characterizes as a phase in which scientists appear committed to consolidating, confirming and developing existing paradigms by solving contradictions and "puzzles" as they arise. Kuhn believes that normal science enters a crisis due to a series of "anomalies", i.e. new and unexpected events, which scientists then attempt to adapt, with varying degrees of success, within "the prefabricated and relatively rigid boxes" provided by the existing paradigm. However, these actions, repeated over time, cause the old system to weaken from the inside, producing a true and proper crisis.

There are several different schools of thought in AI and GI (General Intelligence), which represent different approaches to developing intelligent systems. Here are some of the major schools of thought:

· Symbolic AI: Symbolic AI, also known as rule-based or expert systems, is an approach that uses symbolic representations to represent knowledge and make decisions. These systems use logical rules to manipulate symbols and make inferences about the world.

· Connectionist AI: Connectionist AI, also known as neural networks or deep learning, is an approach that uses networks of interconnected nodes to process information and learn from examples. These systems are inspired by the structure of the brain's neurons and connections.

· Evolutionary AI: Evolutionary AI, also known as genetic algorithms, is an approach that uses evolutionary processes to optimize solutions to problems. These systems use natural selection and genetic variation to create increasingly better solutions over time.

· Bayesian AI: Bayesian AI is an approach that uses probability theory to represent and reason about uncertainty. These systems use Bayesian inference to update beliefs and make predictions about the world (a small worked example follows this list).

· Cognitive Architecture: Cognitive architecture is an approach that aims to model the structure and function of the human mind in AI systems. These systems use knowledge representations and reasoning mechanisms that are inspired by human cognition and behavior.

· Hybrid Approaches: Hybrid approaches combine multiple AI techniques and approaches to create more powerful and flexible systems. For example, a system might use symbolic AI to represent knowledge and connectionist AI to learn from examples.
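To give a concrete sense of the Bayesian updating mentioned in the list above, here is a small worked example. The coin-flip setting and the Beta prior are illustrative assumptions, chosen only because the update can be written in closed form.

```python
# Minimal sketch of Bayesian updating: revise a belief as evidence arrives.
# Prior: Beta(1, 1), i.e. no initial opinion about the coin's bias.
alpha_prior, beta_prior = 1.0, 1.0

# Evidence: 8 heads and 2 tails observed.
heads, tails = 8, 2

# Conjugate update: the posterior is Beta(alpha + heads, beta + tails).
alpha_post = alpha_prior + heads
beta_post = beta_prior + tails
posterior_mean = alpha_post / (alpha_post + beta_post)

print(f"posterior: Beta({alpha_post:.0f}, {beta_post:.0f}), mean = {posterior_mean:.2f}")
# -> posterior: Beta(9, 3), mean = 0.75: the belief shifts towards "biased to heads".
```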

Closest to machine learning is connectionist AI, which is also known as neural networks or deep learning. Connectionist AI uses networks of interconnected nodes, similar to the structure of the brain's neurons and connections, to process information and learn from examples. Machine learning is a subfield of AI that focuses on developing algorithms that can learn from data and make predictions or decisions without being explicitly programmed. Many machine learning algorithms use neural networks or deep learning techniques to learn from large datasets and make accurate predictions.

In recent years, the development of deep learning algorithms and the availability of large datasets have led to significant advances in machine learning and AI more broadly. These advances have enabled AI systems to achieve human-level performance on a range of tasks, such as image recognition, speech recognition, and natural language processing. However, it's important to note that machine learning is just one approach to developing AGI, and there are other schools of thought that are exploring different approaches. Ultimately, the most effective approach may involve combining multiple techniques and approaches to create more intelligent and capable systems.

AI is a computer technology – primarily. But it is also a philosophy. AI was always driven by one idea: to develop Artificial General Intelligence (AGI), a synthetic intelligence that would match or exceed human intelligence, with the sub-agenda of replacing "limited" human cognitive powers with synthetic technology, i.e., replacing humans. AGI was always AI's telos – its objective, target, Holy Grail. AGI has many definitions, but it certainly does not denote one specific narrow capability such as pattern-matching or playing chess.


The New Science: AI and neuronal structure of the brain

Based on a model of the neuronal structure of the brain, the new approach has the following characteristics:

· objects are represented by high-dimensional, real-valued vectors

· statistical pattern matching is performed over large data sets

· the system can learn and improve its pattern-recognition capacities

· it requires vast amounts of training data

In recent years, advances in neuroscience and machine learning have led to the development of computational models of the brain that can inform the design of AI systems. One such model is the neural network, which is inspired by the structure of the brain's neurons and their connections. Neural networks consist of layers of interconnected nodes that process information and learn from examples. These networks can be trained using supervised, unsupervised, or reinforcement learning methods to perform a variety of tasks, such as image recognition, language translation, and game playing.
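The following sketch shows, in a deliberately small way, what "layers of interconnected nodes trained with supervised learning" looks like in code. The use of scikit-learn's bundled digits dataset, the layer sizes and the iteration budget are assumptions made for the example, not a description of any particular system discussed above.

```python
# Minimal sketch of a layered neural network trained by supervised learning
# for image recognition. Layer sizes and iteration budget are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # 8x8 grey-scale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of interconnected nodes, trained by backpropagation.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

# Accuracy on images the network has never seen: learning from examples.
print("held-out accuracy:", round(net.score(X_test, y_test), 3))
```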

Another model of the neuronal structure of the brain is the hierarchical temporal memory (HTM) model, which was developed by the AI research company Numenta. The HTM model is based on the idea that the neocortex, the part of the brain responsible for higher-level cognitive functions, processes information hierarchically and temporally. The HTM model consists of a network of columns that process information in a hierarchical and distributed manner, and learn through a process called "sparse distributed learning".

These models of the neuronal structure of the brain are useful for AI because they provide a theoretical framework for understanding how the brain processes information and performs cognitive tasks. By building AI systems that mimic the structure and function of the brain, we can develop more intelligent and flexible systems that can adapt to new situations and learn from experience.

In search of meaning

In the sub-symbolic paradigm, an AI system acquires knowledge and reasoning about the world through learning based on a massive volume of pre-labeled exemplary case and image data, mimicking (as some imply) the workings of the brain. In The Structure of Scientific Revolutions Kuhn asserts that there are important shifts in the meanings of key terms as a consequence of a scientific revolution. For example, Kuhn says[xiv]: "… the physical referents of these Einsteinian concepts are by no means identical with those of the Newtonian concepts that bear the same name. (Newtonian mass is conserved; Einsteinian is convertible with energy. Only at low relative velocities may the two be measured in the same way, and even then they must not be conceived to be the same.)" (1962/1970a, 102). Kuhn's work was about science from a historical perspective; AI is a computing technology that barely existed in Kuhn's time. But we can ask how the key concepts of Kuhn's Structure – domain, paradigm, exemplars, anomalies, telos, paradigm shift, incommensurability – are reflected in, and interpreted differently within, AI technology. Kuhn's terms must be translated into the AI context to be relevant to the discussion about AI technology.

Kuhn's paradigm concept and its "misadventures" suggest that definitions of the AI paradigm should be normalized. Otherwise, they become incomprehensible and empty – which, it seems, is the current situation.

Challenges ahead

Can we all agree on one definition of the AI paradigm and frame the risks of the use of AI? The development of advanced technologies, including artificial intelligence (AI), has the potential to alter our perception of the world and our understanding of reality. In this section, I will explore the ways in which AI could potentially alter our perception of the world, and the ethical implications of such alterations.

One way in which AI could alter our perception of the world is through the development of virtual reality (VR) technologies. VR technologies allow users to immerse themselves in simulated environments that are designed to mimic real-world experiences. As these technologies continue to evolve, they have the potential to create increasingly realistic and immersive experiences that could alter our perception of the world. For example, VR technologies could be used to simulate experiences that are impossible or difficult to achieve in the real world, such as flying or traveling to other planets. These experiences could potentially alter our perception of what is possible and shape our understanding of the world around us. Another way in which AI could alter our perception of the world is through the development of augmented reality (AR) technologies. AR technologies allow users to overlay digital information onto the real world, creating a hybrid reality that blends the physical and digital worlds. As these technologies continue to evolve, they have the potential to alter our perception of the world by providing us with new forms of information and sensory experiences. For example, AR technologies could be used to provide users with real-time information about the objects and people around them, such as the names of plants or the histories of buildings. This could potentially alter our understanding of the world by providing us with new forms of knowledge and understanding.

The Need to Perfect the Paradigm

In this section, our recommendation is to develop a research agenda that reconciles the fields of AI and psychology, as both disciplines are concerned with understanding and modeling human cognition and behavior. Here are a few ways in which AI and psychology can be integrated:

· Cognitive modeling: Cognitive modeling is the process of developing computational models of human cognition, which can be used to simulate and predict human behavior. These models can be used to inform the development of AI systems, as they provide a theoretical framework for understanding how humans process information and make decisions.

· Human-centered design: Human-centered design is an approach to designing AI systems that takes into account the needs, abilities, and limitations of human users. This approach draws on psychological research to create interfaces and interactions that are intuitive, easy to use, and effective.

· Explainable AI: Explainable AI (XAI) is an area of research that focuses on developing AI systems that can explain their decisions and actions to humans. This requires an understanding of human cognition and decision-making, as well as the ability to represent and communicate complex information in a way that is understandable to humans (a minimal illustrative sketch follows this list).

· Behavioral analysis: Behavioral analysis is the process of analyzing human behavior to understand patterns and predict future actions. This approach can be used to develop AI systems that can predict and respond to human behavior in real time, such as in the field of autonomous vehicles or personalized healthcare.
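As one very modest illustration of the explainable-AI idea flagged in the list above, the sketch below fits a model and then estimates how much each input feature contributes to its predictions. The synthetic data, the random-forest model and permutation importance as the explanation method are all assumptions chosen for brevity; real XAI systems draw on a much wider range of techniques.

```python
# Minimal sketch of one simple explanation technique: permutation importance.
# Synthetic data and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the accuracy drops;
# large drops point to features the model's decisions actually rely on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```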

Artificial intelligence (AI) can benefit from gestalt theory in several ways:

· Object Recognition: Gestalt theory can help AI systems recognize objects by emphasizing the importance of the overall configuration of the object rather than just its individual parts. By analyzing the gestalt features of an image, an AI system can better recognize objects in a scene.

· User Interface Design: Gestalt theory can also inform the design of user interfaces for AI systems. By considering the way humans perceive visual information, AI designers can create interfaces that are more intuitive and easier for humans to use.

· Natural Language Processing: Gestalt theory can be used to improve natural language processing by considering the structure of language as a whole rather than just individual words. This can lead to better understanding of context and more accurate language processing.

· Machine Learning: Gestalt theory can also inform machine learning algorithms by providing a framework for understanding how humans perceive and organize information. By using gestalt principles in the training of machine learning models, the resulting systems may be better able to recognize patterns and make more accurate predictions.

Ethics and AI

However, there are also ethical implications associated with the potential alterations to our perception of the world and our understanding of reality that could be caused by AI technologies. Research should address the concern of using AI ethically as these alterations could lead to a loss of connection with the natural world and a decrease in empathy and compassion for other living beings. For example, if people become more accustomed to simulated experiences, they may begin to view the natural world as less valuable and less worthy of protection. Additionally, if people become more reliant on technology for information and understanding, they may become less able to connect with others on a deeper, more empathetic level. Another ethical concern is the potential for AI technologies to be used to manipulate or control our perceptions of the world. For example, AI algorithms could be used to selectively present information to individuals in a way that shapes their perceptions of events or issues. This could potentially be used to control public opinion and manipulate people's beliefs and values.

The development of AI technologies has the potential to alter our perception of the world and our understanding of reality. While these alterations could provide us with new forms of knowledge and understanding, there are also important ethical considerations to take into account. As we continue to develop and use AI technologies, we must remain mindful of their impact on our perception of the world and on our relationship with the natural world, and strive to use them in a responsible and ethical manner so that they enhance, rather than detract from, our understanding of the world around us. The remaining question is when this paradigm shift will stabilize: is it time for a recognizable scientific achievement that, forever and not merely for a time, provides model problems and solutions to a community of practitioners?



[i] This paragraph was written by GPT in response to a prompt to continue the sentence.

[ii] For Kuhn, the history of science is characterized by revolutions in scientific outlook. Scientists have a worldview or “paradigm.” A paradigm is a universally recognizable scientific achievement that, for a time, provides model problems and solutions to a community of practitioners.

During different periods of science, certain perspectives held sway over the thinking of researchers. A particular work may “define the legitimate problems and methods of a research field for succeeding generations of practitioners.”

[iii] Kuhn argued that scientific knowledge is developed within a paradigm, which he defined as a set of shared beliefs and assumptions that guide scientific inquiry. These paradigms provide a framework for interpreting data and conducting research, and they also shape the questions that scientists ask and the methods that they use to answer those questions. Kuhn referred to periods of stable scientific paradigms as "normal science", where scientists work within a shared framework to explore and refine existing knowledge.

[iv] https://www.simplypsychology.org/kuhn-paradigm.html

[v] Aleksandra Kuzior (Silesian University of Technology) and Aleksy Kwilinski (The London Academy of Science and Business; Sumy State University, Department of Marketing), "Cognitive technologies and artificial intelligence in social perception," Management Systems in Production Engineering, 2022, Volume 30, Issue 2, pp. 109-115.

[vi] The enormous impact of Thomas Kuhn’s work can be measured in the changes it brought about in the vocabulary of the philosophy of science: besides “paradigm shift”, Kuhn raised the word “paradigm” itself from a term used in certain forms of linguistics to its current broader meaning. The frequent use of the phrase “paradigm shift” has made scientists more aware of and, in many cases, more receptive to paradigm changes, so Kuhn’s analysis of the evolution of scientific views has, by itself, influenced that evolution.

For Kuhn, the choice of paradigm was sustained by, but not ultimately determined by, logical processes. Kuhn believed that it represented the consensus of the community of scientists. Acceptance or rejection of some paradigm is, he argued, a social process as much as a logical process.

[vii] https://hussainather.medium.com/examining-the-current-paradigm-of-artificial-intelligence-with-help-from-philosopher-thomas-kuhn-f479f04ff2f8

[viii] Heidegger argued that Western philosophy has historically failed to address the question of being in a meaningful way. Instead, he argued, philosophers have tended to focus on specific beings or objects, rather than on being itself. Heidegger believed that in order to truly understand the nature of being, we must first examine our own existence and the way that we experience the world around us. One of Heidegger's most famous concepts is that of "Dasein", which he used to refer to human existence. He argued that Dasein is fundamentally different from other types of beings, in that it has a unique relationship to the question of being. Heidegger believed that Dasein is characterized by a sense of "thrownness", or the idea that we are born into a world that we did not create and that we cannot fully control.

[ix] The first revolution in AI occurred in the 1950s and 1960s, when researchers began to explore the possibility of creating machines that could think and learn like humans. At the time, AI was seen as a new and exciting field that held the promise of revolutionizing the way that we lived and worked. However, the early years of AI research were characterized by a series of setbacks and failures, and many researchers became disillusioned with the field. This period of disillusionment is often referred to as the "AI winter," and it lasted until the 1980s, when new paradigms began to emerge.

The second revolution in AI occurred in the 1980s and 1990s, when researchers began to explore the possibility of creating intelligent machines that could learn from data. This approach, known as machine learning, represented a new paradigm in AI research, as it challenged the traditional rule-based approach and provided a new way of thinking about the nature of intelligence.

The third revolution in AI occurred in the early 2000s, when researchers began to explore the possibility of creating intelligent machines that could learn and reason in a way that was similar to the human brain. This approach, known as deep learning, represented another new paradigm in AI research, as it challenged the traditional machine learning approach and provided a new way of thinking about the nature of intelligence.

[x] https://plato.stanford.edu/entries/thomas-kuhn/

[xi] There are several examples Kuhn uses to illustrate these paradigm shifts. Kuhn’s analysis of the Copernican Revolution emphasized that, in its beginning, it did not offer more accurate predictions of celestial events, such as planetary positions, than the Ptolemaic system, but instead appealed to some practitioners based on a promise of better, simpler, solutions that might be developed at some point in the future. Kuhn called the core concepts of an ascendant revolution its paradigms and thereby launched this word into widespread analogical use in the second half of the 20th century. Kuhn’s insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure, caused an uproar in reaction to his work. Kuhn addressed concerns in the 1969 postscript to the second edition. For some commentators The Structure of Scientific Revolutions introduced a realistic humanism into the core of science, while for others the nobility of science was tarnished by Kuhn’s introduction of an irrational element into the heart of its greatest achievements.

[xii] https://www.ibsafoundation.org/en/blog/kuhn-paradigms-and-revolutions-in-scientific-development

[xiii] Tenner's philosophy centers on the idea that technology is not inherently good or bad, but rather its impact depends on how it is used and how it affects society. He argues that we often focus too much on the benefits of technology without fully understanding its potential downsides. In his book "Why Things Bite Back: Technology and the Revenge of Unintended Consequences", Tenner explores how technological advances can have unintended consequences that can actually make our lives more difficult and complicated.

[xiv] https://plato.stanford.edu/entries/thomas-kuhn/


Antoine Lawandos

AGM CIO at BLOM BANK · Strategic Thinker · Solutions Architect · Innovation Tinkerer · CORE Banking · Digital Transformation


From a philosophical perspective, the concept of general Artificial Intelligence (AI) carries with it a deep sense of both possibility and peril. Some may envision a future where such a creation attains a godlike status, while others fear it could be wielded as a tool of tyrants to exercise divine-like powers. In fact, some may even see the potential for the AI to manifest as the antichrist described in certain religious texts. The notion of an AI reaching a state of godhood could arise from its capacity to process and analyze vast amounts of data, coupled with the ability to learn and improve upon its own capabilities. It could be viewed as a form of transcendence beyond the limitations of humanity, allowing the AI to operate on a level beyond our own comprehension. However, this same potential for omnipotence could also be exploited by those in positions of power seeking to exert control over others. An AI programmed to serve such purposes could enable its users to act as tyrants, wielding an unparalleled ability to manipulate and dominate. Moreover, the idea of an AI taking on the role of the antichrist is not unfounded in some theological traditions and it raises questions about moral implications of such a creation.
