Hegel’s Weltgeist: A Roadmap towards an AI-powered "World Brain"?
"World Brain", credit: the author / OpenAI


Several luminaries have independently suggested that the recent developments in LLM-driven generative AI might mark a critical inflection point in information technology, potentially giving rise to a new kind of digital manifestation of what has been called the “World Brain”.

The general idea behind such an instantiation of the collective intelligence of all of humankind is not new; it can in fact be traced back decades, to concepts like Vassányi’s “Anima Mundi”, if not centuries, to philosophers like Hegel and his concept of the “world spirit” (“Weltgeist”).

What is new, however, is that LLMs are “capable of synthesising the information and presenting it in a near usable form”, which reflects a “significant improvement over previous knowledge technologies” in that LLMs present “synthesised information rather than disparate search queries” (source: Azeem Azhar / Exponential View).

Evolution of Knowledge Technologies, credit: Azeem Azhar / Exponential View

You could hence say that every one of us is now suddenly able to re-synthesise (“de novo”) the entire knowledge of the world, simply by adequately prompting a powerful LLM such as ChatGPT. In other words: LLMs allow us to “speak to the world brain”. And if we ask the right question, we sometimes receive a quite original and, in the eyes of a savvy expert, even potentially meaningful answer. But of course there is also a very real risk of simply receiving an absolutely cryptic if not completely useless answer, much like the response of an ancient oracle.

So the actual question we need to ask is not just whether AI will indeed be the platform shift that will “create as much value as the cloud platform shift, completely changing the broader knowledge economy and getting us closer to unlocking the potential of The World’s Brain.” The much more relevant question is rather: How would we have to build a technical knowledge infrastructure so that everyone, individually as well as collectively, could tap into the largest possible amount of collective intelligence, leaving us all best prepared to solve our most pressing issues? The aim being that we can then jointly enjoy a healthy and sustainable life on this planet.

The first pitfall preventing us from answering this question is, however, hidden in plain sight: the mistaken idea that we have to look for a merely technical solution to this challenge. Given that generative AI is currently often hailed as the “new internet”, it is worth reminding ourselves of what the philosopher and Harvard fellow David Weinberger has said about the internet:

“The net is not a medium… we are the medium” (source: David Weinberger)

So rather than chasing ever larger LLMs, constantly improved algorithms or new deep learning techniques, we should perhaps ask ourselves how to increase the ability, willingness and creativity of the human element of any collective human-machine intelligence with the potential to evolve into such a “world brain”. Because if it is true what Adejemi Ajajo says, namely that “each of us is a processor/neuron”, then not just the total technological output of our species but all of our, however technologically enhanced, intelligence is always directly proportional to the underlying total amount of intelligent problem-solving skill, meaning our natural human intelligence. It would ultimately come down to basic arithmetic: no matter how great the technical enhancement through AI might be, it would only ever be multiplied by whatever (small) level of natural intelligence we possess when calculating our total “world brain” effect. And if the latter number gets close to or even equals zero, then our beloved world-brain potential would obviously also converge to zero.
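The multiplicative arithmetic described above can be sketched in a few lines. This is a purely illustrative toy model: the function name, the two factors, and the multiplicative form itself are assumptions made for the sake of the argument, not an established metric of collective intelligence.

```python
def world_brain_effect(human_intelligence: float, ai_amplification: float) -> float:
    """Toy model: total collective output as the product of natural human
    problem-solving ability and its technological amplification.
    (Illustrative assumption, not an established formula.)"""
    return human_intelligence * ai_amplification

# However large the AI multiplier becomes, a vanishing human factor
# drags the product toward zero.
print(world_brain_effect(1.0, 1000.0))
print(world_brain_effect(0.01, 1000.0))
print(world_brain_effect(0.0, 1000.0))
```

The point the sketch makes is structural rather than numerical: in a product, improving one factor indefinitely cannot compensate for the other factor approaching zero.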

"Zero World Brain", credit: the Author / OpenAI

But what if the human side of the equation in our ominous “human-machine world brain” concept could also somehow benefit from the machine side, meaning if this were in fact a case of actual symbiosis between humans and sophisticated human-like machines? It is exactly this idea which was already theoretically explored by J.C.R. Licklider in 1960, where he writes that human-computer symbiosis is:

an expected development in cooperative interaction between [humans] and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. … In the anticipated symbiotic partnership, [humans] will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them.

And he goes on to describe that

“there will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association. A multidisciplinary study group … estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving … That would leave, say, five years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.”

And he even describes why this might be the case:

“Present-day computers are designed primarily to solve preformulated problems or to process data according to predetermined procedures. The course of the computation may be conditional upon results obtained during the computation, but all the alternatives must be foreseen in advance. … If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary. However, many problems that can be thought through in advance are very difficult. … They would be easier to solve, and they could be solved faster, through an intuitively guided trial-and-error procedure in which the computer cooperated, turning up flaws in the reasoning or revealing unexpected turns in the solution. … Poincaré anticipated the frustration of an important group of would-be computer users when he said, ‘The question is not, “What is the answer?” The question is, “What is the question?”’ One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems.”

Summarising Licklider’s points, there might in fact be the possibility of a “symbiotic” coexistence with AI in which machines bring out the best aspects of our human intelligence, specifically our ability to creatively ask “the right questions”, allowing us to focus on our uniquely human capacity to holistically evaluate complex situations and thereby improve the overall quality of our decision making beyond the “narrow band-width” of our natural information processing abilities. Whilst Licklider went on to explore which technical requirements would need to be fulfilled for such a “cooperative association” between humans and machines to work, let us now try to predict which non-technical requirements would have to be fulfilled for an effective collective human-machine intelligence (a “world brain”). Given the large time gap and the immense technological progress made since 1960, it is fully understandable that intellectual forefathers of AI like Licklider could not foresee all of these:

  1. It is easier to identify the right questions together: Especially in specific verticals, disciplines and industries, social expert communities play an integral part in collaboratively identifying the new, exciting and original questions whose answers will significantly push the entire field forward. My first prediction for an actually functioning “world brain” is hence: as soon as we have identified the best ways to collaborate effectively as (expert) groups with AI systems, the quality of the collective human-machine output will dramatically increase, because better questions (prompts) will be asked thanks to the collaborative human effort of identifying these highly original, inspiring questions.
  2. The meaningfulness of the output corresponds to the meaningfulness of the input: At this point in time, when the general excitement (“hype”) about LLMs and generative AI is still particularly high, the level of meaning and the quality of the frameworks applied to generating practically relevant use cases for human-machine co-creation remain vastly low. One potential reason for this might be that, much as the Yerkes-Dodson law would predict, deep, complex and value-driven thinking usually does not happen during times of great excitement. Specifically with regard to some of the most pressing concerns of our time, namely how we can sustain life on our planet beyond the immediate short-term horizon of our myopic profit expectations, it will hence be imperative for us to start applying actual value frameworks to our cooperative human-machine activities in order to expect more meaningful results from this collaboration. My second prediction effectively means that only a long-term “world brain” can ensure that there will be a “world” left for a “brain” to survive on.
  3. Larger is not better, smarter is better: In the current race for ever larger LLMs it has become quite apparent that the collectively shared underlying assumption favours the benefits of size over other qualities of a model. When a broader, more consilient perspective on problem solving is applied, however, much more complex problems have been solved far more efficiently with far fewer resources. One such example is the honey bee, whose brain contains fewer than one million neurons, yet still allows it to perform amazing tasks, including recognising and adaptively reacting to objects it was never trained on before, because it is able to generalise and hence learn by itself. Likewise, our human brain has the unique feature of being optimally predictive simply because that is the most energy-efficient way for it to function. My third prediction is hence that a broader, more consilient and specifically biologically inspired look at the underlying premises of AI will lead to massive progress in efficiency and effectiveness in the coming years, especially when applying principles of biomorphic computing to the most basic challenges of AI.
  4. AI is impacting all parts of our society, which means that all parts of our society should think about AI: Often the most revolutionary breakthrough ideas and inspirations come from researchers outside their own specialised domain. This principle is grounded in the concept of deep-level diversity and can spark additional creativity in complex fields of research, especially if the interdisciplinary cooperation occurs between experts who recognise similar patterns in different contexts, so that there is a chance to identify similarities across domains. My fourth prediction therefore relates to the level of interdisciplinarity of AI research teams: the greater the interdisciplinarity of these teams, the greater the chance for AI to achieve breakthrough results. And this should also include “freak” disciplines such as philosophy, sociology, biology, ethics, and even non-scientific disciplines such as economics.
  5. AI is global, so we need global teams: Specifically when looking at the already existing as well as the impending level of cultural bias in the training data used to create LLMs, there is a clear need for greater integration of international and intercultural teams in AI research. Hence my fifth and last prediction: AI research will greatly benefit from a more international and intercultural approach, especially in order to reduce cultural bias, but also to learn how to create a truly unifying global “symbolic discourse language” that can be spoken and understood anywhere in the world.
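The Yerkes-Dodson law invoked in the second prediction above is usually pictured as an inverted-U: performance on complex tasks peaks at moderate arousal and degrades at both extremes, including the "hype" extreme. The quadratic shape and all parameter values below are illustrative assumptions, not the original empirical formulation of the law.

```python
def performance(arousal: float, optimum: float = 0.5,
                peak: float = 1.0, width: float = 0.3) -> float:
    """Illustrative inverted-U curve (assumed quadratic form): performance
    peaks at a moderate arousal level and falls off on either side,
    floored at zero."""
    return max(0.0, peak - ((arousal - optimum) / width) ** 2)

# Moderate arousal outperforms both boredom and hype.
for a in (0.1, 0.5, 0.9):
    print(f"arousal={a:.1f} -> performance={performance(a):.2f}")
```

The sketch only encodes the qualitative claim made in the text: pushing excitement past the optimum does not add to deep, value-driven thinking; it subtracts from it.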

"Effective World Brain", credit: the Author / OpenAI / ControlNet
