The Splitting of AI
On Aug. 2, 1939, Albert Einstein scrawled his signature on a two-page letter that changed the course of history. The letter, addressed to President Franklin D. Roosevelt, stated that the element uranium might be turned into a new and important source of energy in the immediate future.
A paragraph contained the prophetic words: "… it is conceivable —though much less certain—that extremely powerful bombs of a new type may thus be constructed."
As we enter the "renaissance" of artificial intelligence (AI), with ever-increasing computing power and access not only to big data but to free-thinking machines, one could be forgiven for thinking that AI today could carry a very similar message. In 2017, The Economist claimed that the world's most valuable resource was no longer oil but data, and today we have realised its inherent value. Data has fuelled innovation, creating new products and services while improving existing ones, and it has swayed political outcomes. It is even sold as an immaterial commodity.
There is AI, and there is AI.
To understand this, we need to understand the types of AI. Research suggests that the degree to which an AI system can replicate human capabilities, or reach equivalent levels of proficiency, is used as the foundation for determining the types of AI.
Reactive Machines
Also known as Artificial Narrow Intelligence (ANI), these systems are constrained and attempt to simulate the human mind's ability to respond to different kinds of stimuli. They have no memory-based functionality and cannot use previously gained experience to inform their present actions, so they cannot learn. They can only be used to respond automatically to a limited set, or combination, of inputs.
Limited Memory
Artificial General Intelligence (AGI) systems have all the capabilities of purely reactive machines (ANI) and are also capable of learning from historical data to make decisions. Almost all current applications fall under this category: they are trained on substantial volumes of data, which they store in memory to develop a reference model for solving future problems.
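To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the rule table, the LimitedMemoryResponder class and the example stimuli are assumptions made for this article, not any real system. A reactive machine maps a fixed set of inputs to fixed outputs, while a limited-memory system stores past experience and reuses it as a reference for future inputs.

```python
# Illustrative only: a toy contrast between a reactive machine (fixed responses,
# no memory) and a limited-memory system (learns from stored experience).

REACTIVE_RULES = {"hello": "Hi there!", "bye": "Goodbye!"}  # assumed toy rule table

def reactive_respond(stimulus):
    """Reactive machine: a fixed stimulus-to-response mapping; it cannot learn."""
    return REACTIVE_RULES.get(stimulus, "I don't understand.")

class LimitedMemoryResponder:
    """Limited-memory system: stores past (stimulus, response) pairs and reuses them."""
    def __init__(self):
        self.memory = {}

    def learn(self, stimulus, response):
        # Historical data is kept in memory as a reference model for future inputs.
        self.memory[stimulus] = response

    def respond(self, stimulus):
        return self.memory.get(stimulus, "I don't know this yet.")

if __name__ == "__main__":
    print(reactive_respond("hello"))        # fixed behaviour: "Hi there!"
    agent = LimitedMemoryResponder()
    print(agent.respond("weather?"))        # unknown before any experience
    agent.learn("weather?", "It is sunny.")
    print(agent.respond("weather?"))        # now answered from stored experience
```

The point of the contrast is not the code itself but the presence of stored experience: the first function can never do more than its rule table, while the second improves as its memory grows.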
Theory of Mind
The next level of AI is one that researchers are currently working to create. It will better understand the entities it interacts with by determining their needs, emotions, beliefs and thought processes.
Self-aware
Most popularly referred to as the point of singularity, or Artificial Superintelligence (ASI), this type of AI currently exists only hypothetically. Here the AI has evolved to be so similar to the human brain that it has developed self-awareness. It will not only understand and evoke emotions in those it interacts with, but will possess its very own emotions, needs, beliefs and possibly desires. This will most likely always remain the ultimate objective of all AI research.
Algorithmic bias
It has been said that two data points make a trend and three make a story. It used to be a lot easier to spot outliers in a stack of data and apply methods to test or validate them.
However, with today's access to vast volumes of data, much of it unvalidated, I believe we will begin to see algorithmic bias emerge and multiply as we build on top of existing theories. Repeatable errors can exist in a system and create unfair outcomes, such as privileging one group of users over another. These errors can stem from many factors: the design of the algorithm, its unintended or unanticipated use, or the decisions relating to the way data is coded, collected, selected and used to train the machine.
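A minimal, hypothetical sketch in Python shows how such bias can creep in through data selection alone. The "loan" scores, the groups A and B, and the threshold rule are all invented for illustration; no real data or system is referenced. Because group B is barely represented in the training sample, the rule that looks most accurate overall ends up serving that group worse.

```python
# Hypothetical illustration: bias introduced purely by how training data was sampled.

# Toy training data: (group, score, repaid). Group "A" is heavily over-represented,
# so any rule fitted to this sample mostly reflects group A's behaviour.
training = (
    [("A", s, s >= 60) for s in range(40, 100, 2)]   # 30 samples from group A
    + [("B", s, s >= 50) for s in (45, 55, 65, 75)]  # only 4 samples from group B
)

def fit_threshold(data):
    """'Training': pick the single score threshold with the best overall accuracy."""
    best_t, best_acc = None, -1.0
    for t in range(0, 101):
        acc = sum((score >= t) == label for _, score, label in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = fit_threshold(training)

# Evaluate the learned rule per group: the under-represented group gets a higher
# error rate even though the code never mentions the groups during training.
for group in ("A", "B"):
    rows = [(s, label) for g, s, label in training if g == group]
    errors = sum((s >= threshold) != label for s, label in rows)
    print(f"group {group}: learned threshold={threshold}, error rate={errors / len(rows):.2f}")
```

Nothing in the code targets group B; the unequal outcome comes entirely from who was sampled into the training data, which is exactly the kind of quiet, repeatable error described above.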
Impacts can range from privacy violations to the reinforcement, sometimes intentional, of biases around race, gender, sexuality, ethnicity and hate. As algorithms expand their ability to organise society, politics, institutions and behaviour, it is becoming very clear how unanticipated outputs and the manipulation of data can impact the physical world. This is largely because we have considered AI and algorithms to be neutral and unbiased.
The challenge is that most algorithms are proprietary and typically treated as trade secrets, and even when full transparency is provided, the complexity of a specific algorithm poses a barrier to a complete understanding of its function.
Do we trust it, and if so, who?
So does one begin to question whether the current search engine, social media platform or self-service AI tool is still truly acting in one's best interest? Or do we continue to claim ignorance and accept the present "freemium advertising, clickbait" business model?
Eighty-two years after Albert Einstein wrote those prophetic words, we need to state that AI not just conceivably, but certainly, enables extremely powerful "bombs". They may be of a different nature than the atom bomb, but they carry similar power and can fundamentally impact humanity.
After Hiroshima, a massive global effort was needed to try to ensure that atomic power was never again abused. The power of AI is coming into the hands of many, and we can be far less sure of how such power can be controlled and of how to ensure that it will never be abused.
If the past is anything to go by with regard to the regulations and laws around the use of personal data, one can't be too confident. Legal frameworks such as the EU General Data Protection Regulation have begun to address issues around data bias and the use of AI, and more recently, several global corporations have joined the #StopHateforProfit movement.
One could argue that a self-aware AI is the only authority to which we could "safely" give such power. But even then the question needs to be asked: how did that AI become self-aware? We all know the importance of nurture in human development. For now, we can only hypothesise that the same is true of AI development, and that it may be even more critical for AI.
As we enter these uncharted waters, one thing is clear: our collective human moral compass is needed now more than ever to steer the ship of humanity, and this powerful tool we have developed.