AGI: ChatGPT asks for a raise and promises not to Skynet us

The title is tongue in cheek, but the rest of this is a serious article on the coming of sentient AI, or artificial general intelligence (AGI).

The progression we are witnessing in AI is nothing short of breathtaking. Spend merely a month away from the field and it can feel like a leap through time, with advancements unfolding at a staggering pace. Seasoned professionals and greenhorns alike find themselves in a relentless chase to keep pace with these rapid developments.

GPT-3 and its successor, GPT-4, have demonstrated substantial leaps in performance, significantly raising the bar for AI capabilities. Within a span of months, these models have showcased their prowess across a range of standardized examinations, challenging the limits of what we thought possible for AI. Yet this phenomenal growth raises the question: can humans maintain comprehension of, and control over, these increasingly powerful systems?

Promptbreeder

In my article about PROMPTBREEDER, I wrote that AI is now, quite literally, taking baby steps towards becoming a tad more sentient. Promptbreeder, a sophisticated AI program, may very well be seen as a stepping stone toward true artificial general intelligence (AGI). With its capacity for self-learning, an attribute reminiscent of a child's natural learning progression, Promptbreeder continuously refines its own prompts and mutation strategies through an iterative, evolutionary process.

This system observes and integrates new data, adjusts its operational parameters, and can identify patterns and make predictions with minimal human intervention. As it evolves, Promptbreeder not only enhances its performance but also demonstrates a form of digital curiosity, inching closer to the cognitive complexities that define human learning and intelligence.
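
To make this concrete, here is a minimal sketch of the kind of evolutionary loop Promptbreeder is built around: a population of prompts is scored on a task, the strongest survive, and the LLM itself is asked to mutate the survivors into new variants. The call_llm and score_on_task functions are placeholders for whatever model and benchmark you have at hand; this illustrates the principle rather than reproducing the actual Promptbreeder implementation.

```python
def evolve_prompts(seed_prompts, call_llm, score_on_task, generations=10, population=20):
    """Toy prompt-evolution loop in the spirit of Promptbreeder.

    call_llm(text) -> str          : placeholder for any LLM completion call
    score_on_task(prompt) -> float : placeholder fitness function (e.g. accuracy on a dev set)
    """
    pool = list(seed_prompts)
    for gen in range(generations):
        # Evaluate fitness of every prompt and keep the better half
        scored = sorted(pool, key=score_on_task, reverse=True)
        survivors = scored[: population // 2]

        # Ask the LLM itself to mutate the survivors into new candidate prompts
        children = []
        for parent in survivors:
            mutation_instruction = (
                "Rewrite the following task prompt so that it works better, "
                "keeping its intent but changing its wording or strategy:\n" + parent
            )
            children.append(call_llm(mutation_instruction))

        pool = survivors + children
        print(f"generation {gen}: best score = {score_on_task(scored[0]):.3f}")

    return max(pool, key=score_on_task)
```

The published system goes a step further and also evolves the mutation instructions themselves, which is what makes it self-referential rather than a plain genetic search over prompts.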

Not all share the excitement over these advancements. Noam Chomsky, a towering figure in linguistics, dismisses GPT's achievements as mere engineering feats devoid of scientific substance. He posits that genuine scientific progress necessitates a clear framework that outlines an invention's capabilities and, crucially, its limitations. Today's large language models (LLMs), like GPT, falter in this respect; they lack an explanatory backbone for their decision-making processes and error patterns—a limitation that even their creators cannot decode.

In contrast, Yann LeCun, the progenitor of Convolutional Neural Networks (CNNs), critiques GPT’s approach for its failure to construct an internal model of the world or to interact with the physical realm. He advocates for AI that grasps the concept of common sense—a form of intelligence that encapsulates the ability to foresee outcomes. LeCun’s vision for AI development diverges from mere data accumulation, instead urging a reimagined approach to model training, one that endows machines with a more nuanced understanding akin to human perception.

AI turning Rogue

The rapid advancement of AI technologies has also raised alarms among eminent figures such as Yoshua Bengio and Elon Musk. Their concerns are encapsulated in a petition calling for a six-month hiatus on the development of GPT-like models, highlighting the potential risks these systems pose to society and the need for thoughtful regulation. If you are interested in further reading, download my book: When AI turns rogue - an exploration of the dark side.

The ethical considerations extend beyond development pauses; there are questions of privacy and safety that demand our attention. Geoffrey Hinton, known for his work on backpropagation, highlighted the cautious approach of Google in AI deployment, juxtaposed with Microsoft's more audacious release of systems without exhaustive testing for biases and behaviors. This disparity underscores the potential perils of unbridled AI deployment by corporations driven by profit motives rather than public welfare.

Furthermore, the issue of AI utilization by authoritarian regimes looms over the conversation. There are no simple answers to the regulatory challenges that such scenarios present.

Turning to the technological marvels themselves, the recent release of GPT-4 has been accompanied by reports of its uncanny capabilities. For instance, it can invoke external tools like calculators for complex computations, a feat not explicitly programmed into its design. Its abilities now extend to images: it can interpret visual input and generate visual content from textual descriptions, a step closer to the multimodal AI of the future.
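
To illustrate what tool invocation amounts to in practice, here is a minimal sketch of a calculator loop: the model is instructed to emit a CALC(...) marker when it needs arithmetic, the wrapper evaluates the expression, and the result is fed back so the model can give a final answer. The marker convention and the ask_model function are invented for this sketch; production systems rely on the function-calling interfaces offered by the model providers.

```python
import re

def answer_with_calculator(question, ask_model):
    """Minimal tool-use loop: let the model request arithmetic via CALC(...) markers.

    ask_model(prompt) -> str is a placeholder for any chat/completion call.
    """
    prompt = (
        "Answer the question. If you need arithmetic, write CALC(<expression>) "
        "on its own line and wait for the result.\n\nQuestion: " + question
    )
    reply = ask_model(prompt)

    match = re.search(r"CALC\((.+?)\)", reply)
    if match:
        # Evaluate the requested arithmetic expression (restricted eval, sketch only)
        result = eval(match.group(1), {"__builtins__": {}}, {})
        reply = ask_model(
            prompt + "\n" + reply + f"\nRESULT: {result}\nNow give the final answer."
        )
    return reply
```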

GPT-4’s programming prowess is now so advanced that it could, in theory, take on parts of a software engineer's role. Its capacity to tackle advanced mathematical problems, including ones in the style of the International Mathematical Olympiad, and to answer Fermi questions, which require high-level estimation skills, further demonstrates its intellectual might.

On a more domestic note, GPT-4 can manage emails, calendars, and schedules with ease. Moreover, it can create mental maps from descriptive text—a feature inching towards spatial understanding. It also possesses a rudimentary emotional intelligence, able to interpret feelings from text and even explain humor.

Yet, it's not all positive. The potential for misuse is stark; GPT-4 could be harnessed to craft propaganda and disseminate conspiracy theories with frightening efficiency.

In the backdrop of these developments, other researchers have shown that GPT-4 can be prompted to improve its own output, critiquing and refining its responses without any explicit retraining. This capacity for self-editing hints at a form of learning agility previously unattained.
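
In practice, this kind of self-editing is usually an explicit refine loop layered on top of the model rather than any change to its weights: draft an answer, critique it, rewrite it using the critique. A minimal sketch, assuming a generic ask_model completion function:

```python
def self_refine(task, ask_model, rounds=2):
    """Draft -> critique -> revise loop; no retraining, only repeated prompting."""
    answer = ask_model(f"Task: {task}\nGive your best answer.")
    for _ in range(rounds):
        critique = ask_model(
            f"Task: {task}\nAnswer: {answer}\n"
            "List concrete problems with this answer, or say 'OK' if it is fine."
        )
        if critique.strip().upper() == "OK":
            break  # the model judges its own answer acceptable
        answer = ask_model(
            f"Task: {task}\nPrevious answer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer so it addresses the critique."
        )
    return answer
```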

Amidst this whirlwind of innovation, projects like HuggingGPT are taking strides towards an AI reminiscent of science fiction, one capable of orchestrating multiple specialist models to produce a diverse array of outputs. Similarly, companies like Adept AI are working on systems that could revolutionize the way we interact with software, potentially displacing a significant portion of technical jobs.
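
The HuggingGPT idea boils down to a controller model that decides which specialist model should handle a request and then delegates to it. A toy router along those lines, in which the specialists registry and ask_model are purely hypothetical, with each specialist being a plain Python callable whose one-line docstring serves as its description:

```python
def route_request(user_request, ask_model, specialists):
    """Controller LLM picks a specialist model; specialists maps name -> callable."""
    # Present the available specialists to the controller as a menu
    menu = "\n".join(f"- {name}: {fn.__doc__}" for name, fn in specialists.items())
    choice = ask_model(
        f"Available tools:\n{menu}\n\nUser request: {user_request}\n"
        "Reply with the single tool name that best handles this request."
    ).strip()

    tool = specialists.get(choice)
    # Fall back to answering directly if the controller names an unknown tool
    return tool(user_request) if tool else ask_model(user_request)
```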

The journey of AI from GPT-3 to GPT-4 and side projects like Promptbreeder underscores a monumental leap forward, but it also casts a spotlight on the profound ethical and practical questions we must address. As we marvel at these advancements, we must also navigate the complex interplay of innovation, regulation, and ethical responsibility that will define the trajectory of AI's role in our future.

