The False Dawn of Artificial Intelligence

It seems to me that the current hype around “AI” conflates two different things: on one hand, the idea of creating true intelligence (as measured against human-level intelligence), which includes a degree of agency and consciousness; on the other, a set of tools that use statistical methods to replicate human content generation, such as large language models.

We are certainly experiencing a wave of hype at the moment.

But true human-level intelligence must include a semblance of agency or consciousness.

  • Human consciousness, the state of being aware of and able to think about one's own existence, thoughts, and surroundings, plays a pivotal role in human intelligence. It allows for self-reflection, planning, decision-making, and the ability to learn from experiences, which are essential components of cognitive processes. Consciousness enables humans to integrate sensory information, emotions, and memories into coherent thoughts and actions, fostering complex problem-solving and creativity. By facilitating the subjective experience of the world, consciousness not only enhances personal identity and social interactions but also drives the pursuit of knowledge and understanding, underscoring its centrality in human intelligence.

The current “AI” tools, though, are mostly large language models (LLMs), which effectively simulate the creation of content by mimicking existing content originally created by humans.

  • Large Language Models (LLMs) are designed to generate human language by leveraging deep learning techniques. These models, such as GPT-3 and BERT, are built on transformer architectures that use self-attention mechanisms to process and generate text. During training, LLMs are exposed to massive datasets, sometimes comprising hundreds of billions of words, enabling them to learn intricate patterns and relationships within the language. This training process involves tuning millions or billions of parameters to improve the model's performance in predicting the next word in a sequence based on the context provided by previous words.
  • LLMs operate by transforming text into numerical representations, which allows them to perform a wide range of natural language processing tasks, including text generation, translation, summarization, and sentiment analysis. Their ability to generate coherent and contextually appropriate text makes them valuable for applications such as chatbots, virtual assistants, and content creation tools. The computational power required for training these models is substantial, often necessitating the use of specialized hardware like GPUs.
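The statistical idea behind next-word prediction can be illustrated without a transformer at all. The following sketch is a toy bigram model: it counts which word follows which in a tiny corpus and predicts the most frequent continuation. The corpus and function names are illustrative assumptions, not anything a real LLM uses; real models learn billions of parameters over vastly larger data, but the underlying principle is the same pattern-matching over past text:

```python
# A minimal sketch of statistical next-word prediction using a toy
# bigram frequency model (an illustrative assumption, not a real LLM).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in training."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Note that the model can only ever reproduce continuations it has already seen, which mirrors the article's point: such systems mimic existing content rather than originate it.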

So LLMs are nowhere near true human intelligence, agency, or consciousness.

Whilst LLMs are likely to replace commodity content generation, such as low-end copywriting or image generation, they are unlikely to produce truly novel categories of content, styles, or trends on their own. They are also very unlikely to form goals of their own, such as the destruction of humankind.

Investors are well advised not to confuse one with the other.
