AI Model Optimisation: Breaking a Self-Referential Paradigm
Practically all current AI applications are based on the ability of large (language) models to identify and “learn” complex, non-linear patterns. An integral part of this ability is the mathematical analogy to the biological phenomenon of the action potential, which is defined as:
“a rapid sequence of changes in the voltage across a membrane” which in neurons constitutes “an all-or-nothing event that is initiated by the opening of sodium ion channels within the plasma membrane” (Source: Grider, Jessu, & Kabir, 2022)
In assisting with the “propagation of signals along the neuron's axon toward synaptic boutons … at the ends of an axon [where] these signals can then connect with other neurons”, the action potential effectively fulfils a crucial signal-filtering function essential to the actual (adaptive) learning processes, which in biological brains happen at the synaptic level.
Transposing this idea into the mathematical realm of AI, we find the same concept represented by the activation function “used in artificial neural networks which outputs a small value for small inputs, and a larger value if its inputs exceed a threshold. If the inputs are large enough, the activation function 'fires', otherwise it does nothing. In other words, an activation function is like a gate that checks that an incoming value is greater than a critical number”. (DeepAI.org)
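To make this “gate” idea concrete, here is a minimal sketch in Python/NumPy (illustrative only: the function names, the threshold value and the sample inputs are assumptions of this sketch, not taken from DeepAI.org). The binary step function “fires” in an all-or-nothing fashion, directly mirroring the action potential, while the widely used ReLU is a smoother variant of the same thresholding idea:

```python
import numpy as np

def binary_step(x: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """All-or-nothing gate: outputs 1 only where the input exceeds the
    threshold, mirroring the action potential's all-or-nothing character."""
    return np.where(x > threshold, 1.0, 0.0)

def relu(x: np.ndarray) -> np.ndarray:
    """Smoother, widely used variant: passes positive inputs, blocks the rest."""
    return np.maximum(0.0, x)

inputs = np.array([-2.0, -0.5, 0.3, 1.7])
print(binary_step(inputs))  # [0. 0. 1. 1.]
print(relu(inputs))         # [0.  0.  0.3 1.7]
```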
Just as in the biological realm, activation functions are immensely relevant to any AI model’s capability to learn beyond simple linear relationships, specifically because
“they add non-linearities into neural networks, allowing the neural networks to learn powerful operations. If the activation functions were to be removed from a feedforward neural network, the entire network could be re-factored to a simple linear operation or matrix transformation on its input, and it would no longer be capable of performing complex tasks…” (Source: DeepAI.org)
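This collapse is easy to verify numerically. Below is a minimal sketch (the random weights, layer sizes and omitted biases are assumptions made purely for illustration): without an activation function, two stacked linear layers are exactly equivalent to a single matrix transformation, whereas inserting a ReLU between them breaks that equivalence and adds genuine expressive power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" as plain weight matrices (biases omitted for brevity).
W1 = rng.normal(size=(4, 3))   # first layer:  3 inputs -> 4 hidden units
W2 = rng.normal(size=(2, 4))   # second layer: 4 hidden -> 2 outputs
x = rng.normal(size=3)

# Without a non-linearity, the two layers collapse into the single matrix W2 @ W1.
deep_linear = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(deep_linear, collapsed))  # True: no added expressive power

# With a non-linearity (ReLU) between the layers, no single matrix
# reproduces the mapping in general.
nonlinear = W2 @ np.maximum(0.0, W1 @ x)
print(np.allclose(deep_linear, nonlinear))  # generally False
```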
It is exactly this impressive, yet surely not flawless, performance of LLMs on complex tasks until now considered uniquely “human” (reading, writing, drawing and similarly cognitively challenging feats) that has earned them quite a reputation, along with the hope that this type of AI will in fact be a paradigm shift for human technology development. It is hence all the more surprising that, across the breadth of the recent AI tool “explosion”, there seems to be almost no attempt to use AI to address what has been dubbed “humanity’s most burning problems”.
Of course there might be several good reasons for this, including the still astounding novelty of the recent “AI phenomenon”, although technically speaking we are probably just seeing a temporary peak in an otherwise very long development reaching back to the 1950s. So maybe a better explanation points towards a general “lack of vision” in the technology space, or, as Jiarui Wang has put it:
“The barrier to innovation in generative AI seems to be vision, not tech.”
Before denying the world of technology and engineering their visionary capabilities, however, we need to remind ourselves that we are in fact talking about very complex, systems-theoretical contexts which cannot easily be reduced to simple, quantitative metrics. It is hence also impossible to apply AI modelling directly to any of these complex problems, especially when the problems themselves are still so poorly understood that even educated academics falsely attribute our current environmental crisis simply to increasing levels of carbon emissions, and thus equate and collapse it into a “climate crisis”. There is hence at least some truth in it when these academics admit that
“technology on its own is not the solution” and that “Machine Learning … is not a panacea for climate change” (Source: Eleni Mangina)
especially as climate change is only a superficial symptom of a much larger underlying issue: our global human problem of “lifestyle-planet-misalignment”.
At the very heart of the problem lies a much deeper, behavioural issue we have not managed to deal with, as it has effectively proven too difficult to find a simple Machine Learning based optimisation solution for it: the problem of aligning and balancing (at least) three entirely differently structured and governed systems, of which at least one obviously has no voice in our short-term oriented, capitalistic societies: our planetary environment (sustainability), our very own needs and wants (utility), and our individual as well as social wellbeing (health).
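To see why this balancing act resists a simple Machine Learning style optimisation, consider the following deliberately toy sketch in Python (every objective function, weight and input below is an invented placeholder, not a measurement of anything real). As soon as the three systems are scalarised into one loss, the choice of weights, i.e. the underlying value judgement, determines what counts as the “optimum”, and no optimisation machinery can make that judgement for us:

```python
import numpy as np

# Hypothetical placeholder objectives for a "policy" vector x.
def sustainability(x: np.ndarray) -> float:
    return float(-np.sum(x ** 2))            # penalise resource intensity

def utility(x: np.ndarray) -> float:
    return float(np.sum(x))                  # reward consumption/output

def health(x: np.ndarray) -> float:
    return float(-np.sum(np.abs(x - 0.5)))   # reward moderation

def scalarised(x: np.ndarray, w=(1.0, 1.0, 1.0)) -> float:
    # Weighted-sum scalarisation: the choice of w *is* the value judgement.
    return w[0] * sustainability(x) + w[1] * utility(x) + w[2] * health(x)

x = np.array([0.2, 0.8])
print(scalarised(x, w=(1.0, 1.0, 1.0)))  # score under equal weights
print(scalarised(x, w=(5.0, 1.0, 1.0)))  # same policy, very different score
```

Which weighting is “right” is precisely the political and behavioural question, not a technical one.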
This brings us as close to the root cause of the problem as we can get, leaving us with the burning question: have technology and capitalism become so closely intertwined that we are no longer able even to articulate a practical application for this technology that does not solely optimise for utilitarian efficiency, and hence economic growth? Looking at the actual difficulty we have in adapting our behaviour, even though the most obvious signs of this crisis have become ubiquitous and self-evident, leaves us at a loss when it comes to immediate optimism.
So where can we find inspiration for some actually useful directions on how to apply AI technology in a more balanced and sustainable way? Maybe AI itself can help us find a hint in that regard?
As we can see from the table above, AI does indeed seem to have only a rather limited understanding of how to apply AI technology to solve environmental issues. But what is even more striking: not one of its suggested solutions requires any change of mindsets or behaviours in the underlying markets of the respective industries. It is almost as if we humans were seen as an archaic, unchangeable given, and technology were only there to optimise its own performance in feeding our ever-growing appetite for energy, food, fashion, transport, or any other commodity we crave.
This consequently leads to the painful realisation that what we are dealing with here is neither a technological flaw of AI, nor primarily an alignment problem of modern capitalism. It is rather a more fundamental mismatch between our Stone Age brain configuration and a modern environment full of stimuli that are often not just unfulfilling but outright unhealthy. This is where actual human understanding and creativity come into play, in order to develop solutions that are not merely mono-dimensionally instrumental but holistically sensible: solutions inspired by nature and designed to be in alignment not just with our short-term utilities, but primarily with a higher-level integration of our own and the whole planet’s long-term interests. Such an integration has to be understood as an intricately interconnected, holistic ecosystem in which we ourselves are merely a part, not capitalistic owners of nature. Inspiration for such a higher-level integration can be found in concepts like degrowth, biomimicry, systems thinking, regeneration, and regenerative capitalism.
In closing, we can acknowledge the technological progress made on the part of LLM-based AI systems. At the same time we should realise that we have probably taken only one small step towards leveraging the holistic knowledge required to re-align our way of living as human beings on this planet with the kind of natural balance that evolution had shaped millions of years before we even started to exist as a species of “intelligent apes”. It will be interesting to see whether AI will actually empower us to evolve faster than we are able to perfect all the exciting ways of abusing this new technology in the course of exploiting our natural habitat to the point of our own extinction.