Anticipating AI's next move - article ④
Marco van Hurne
I build Agentic AI companies | Data Science Strategist @ Beyond the Cloud | Data Governance | AI Compliance Officer Certified
If you want to catch up, just read the first three articles about the Generation-, the Productivity-, and the Business (re-)Design Space.
The entire model and its four Spaces (and four articles)
Beyond the Third Space - the race for Singularity
Scientists and Big Tech are both investing heavily in the pursuit of Artificial General Intelligence.
About 15 years ago I read two books by Ray Kurzweil. He is a respected futurist with a long list of honorary doctorates.
He writes accessibly about intelligent machines.
Very interesting books, and I can certainly recommend them.
In his 2005 book "The Singularity Is Near", he predicted that AGI could be achieved by 2029.
AGI is reached when AI can perform any intellectual task a human can, and do it at least as well as we do.
Kurzweil is known for his accurate predictions, and he now even believes this timeline might be a little conservative, given the exponential growth of AI.
Wetware instead of chips
A key development in this journey is the concept of "wetware".
In this new branch of neuroscience, biological systems and computers merge to form "wet", brain-like computers.
Scientists are exploring the use of living (brain) cells as computational units.
For example, in 2022, a team led by neuroscientist Brett Kagan at Cortical Labs showed that a cluster of human brain cells could learn to play Pong.
This experiment demonstrated that brain cells could adapt and learn in a way similar to computer algorithms.
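To give a rough feel for what learning in a closed loop means here, below is a deliberately simple toy in Python. It is not a model of the Cortical Labs setup; it only mimics the general pattern of an agent that acts, gets feedback after every rally, and gradually gets better at meeting the ball. All names and numbers are made up for illustration.

```python
import random

# Toy closed-loop "Pong" learner: the agent sets its paddle as a scaled
# copy of the ball position and adjusts that scaling after every rally,
# based only on how far it missed. The hit rate climbs as feedback accumulates.

w = 0.2          # initial (poor) mapping from ball position to paddle position
lr = 0.05        # learning rate
hits, window = 0, 100

for trial in range(1, 1001):
    ball = random.uniform(-1.0, 1.0)     # where the ball arrives
    paddle = w * ball                    # agent's response
    error = ball - paddle                # feedback signal after the rally
    if abs(error) < 0.1:
        hits += 1
    w += lr * error * ball               # nudge the mapping toward the ball
    if trial % window == 0:
        print(f"trials {trial - window + 1}-{trial}: hit rate {hits / window:.2f}")
        hits = 0
```

In the biological experiment the feedback is delivered as patterned versus unpredictable electrical stimulation rather than a numeric error term, but the loop structure (act, receive feedback, adjust) is the same idea.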
Although it is certainly a breakthrough development, I am not sure if I like this very much.
Working with machines is something that I am enthusiastic about, but using animal or human brain cells as computers is a different story.
And that has everything to do with what separates living organisms from dead matter: consciousness.
I am on the verge of publishing a new book, called "Exploring Consciousness - A Guide for AI Students", and in this book, I will investigate the concept of consciousness across time, cultures, religion, science, and intellectual traditions.
Continuing with wetware: researchers at the University of Reading (Yoshikatsu Hayashi and Vincent Strong) have developed a hydrogel capable of learning.
This simple gel can "remember" and improve its performance in playing Pong.
The gel is made from an electro-active polymer that changes shape in response to electrical signals, and this can be used to show basic learning behavior. It means that even simple materials can possess memory and learning capabilities, which opens up new possibilities for building more advanced wetware systems without repurposing living organisms.
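As a very loose analogy (emphatically not a model of the Reading hydrogel), the sketch below shows how a single decaying internal state can act as memory: it drifts toward where the ball has tended to arrive, the paddle simply follows that state, and the hit rate improves the longer the system has been exposed to the game. Every name and number here is an assumption for illustration only.

```python
import random

# Toy "memory in a material": the internal state is a slowly updated trace
# of where the ball has arrived so far. The paddle sits at that remembered
# position, so the longer the system plays, the closer the trace sits to
# the ball's typical arrival point and the more rallies it wins.

mu = 0.4                      # true average arrival point (unknown to the system)
memory, hits = 0.0, 0

for rally in range(1, 501):
    ball = mu + random.gauss(0, 0.15)   # ball arrives near mu with some spread
    paddle = memory                     # respond purely from the stored trace
    if abs(paddle - ball) < 0.2:
        hits += 1
    memory += 0.02 * (ball - memory)    # fold this rally into the memory
    if rally % 100 == 0:
        print(f"rallies {rally - 99}-{rally}: hit rate {hits / 100:.2f}")
        hits = 0
```

The reported hydrogel result is of course far richer than this, but the underlying point is the same: a material with internal state can improve at a task simply by being exposed to it.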
And even though this is still the stuff of Science Fiction, it comes closer to reality every passing year.
If you have ever watched the Alien movie franchise, you have met the Weyland-Yutani "synthetic" android Ash (played by Ian Holm), followed in later films by Bishop (played by Lance Henriksen) and David (the best of the bunch, played by Michael Fassbender).
Sorry for the nerdiness.
Ash is an advanced synthetic who plays a central role in the events of the first film. He exhibits complex emotions, and even conniving motivations.
He is way more advanced than the HAL 9000 from Stanley Kubrick's film that I started this series with. Bishop is a more sympathetic and trustworthy android who helps the crew fight the aliens. He is severely damaged in the confrontation with the Queen Xenomorph (the alien antagonist), and his synthetic "blood" oozes out of his body.
Now that is what I call wetware…
[Video: Queen takes Bishop]
----------------------------------ooo------------------------------
Higher level brain-machine interfaces
Brain-Machine Interfaces, which I discussed extensively in this piece, are another critical technology in this area.
Neuralink and Neurable are pioneering BMIs that allow direct communication between the brain and computers. Early trials have enabled people to control devices with their thoughts, which indicates that a seamless connection between humans and AI is possible.
Kurzweil also envisioned a future where humans and AI merge, which creates a new form of intelligence.
Kurzweil's vision of merging humans with AI through Brain-Machine Interfaces is not just theoretical. There is significant early research to support this potential. Beyond Neuralink, scientists like John Donoghue at Brown University have made breakthroughs with the BrainGate project.
This research has enabled paralyzed individuals to control robotic arms and computer cursors through thought alone.
Another key figure is Miguel Nicolelis at Duke University, who has demonstrated that BMIs can allow monkeys to control virtual limbs with their brains. These incredible advances hint at a future where human cognition integrates with AI, leading to a whole new era in human-computer interaction.
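To make "controlling a cursor through thought alone" slightly less magical: a classic ingredient in systems like BrainGate is a linear decoder that maps recorded firing rates to intended cursor velocity. The sketch below fakes the neural data with numpy and fits such a map with ordinary least squares; the dimensions, noise levels, and variable names are all made up, and real clinical systems add Kalman filtering and frequent recalibration on top of this.

```python
import numpy as np

# Minimal illustration of linear BMI decoding:
# 1) during calibration, record firing rates while the intended 2D cursor
#    velocity is known, 2) fit a linear map from rates to velocity, 3) at run
#    time, turn new firing rates into a cursor movement command with that map.

rng = np.random.default_rng(0)
n_neurons, n_samples = 40, 2000

# Ground-truth tuning: each neuron's rate depends linearly on velocity.
true_tuning = rng.normal(size=(n_neurons, 2))
velocity = rng.normal(size=(n_samples, 2))                  # intended velocities
rates = velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_samples, n_neurons))

# Calibration: least-squares fit of a decoder mapping rates -> velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Run time: decode an unseen firing-rate vector into a velocity command.
new_velocity = np.array([[0.3, -0.7]])
new_rates = new_velocity @ true_tuning.T + rng.normal(scale=0.5, size=(1, n_neurons))
decoded = new_rates @ decoder
print("intended velocity:", new_velocity[0])
print("decoded velocity :", np.round(decoded[0], 2))
```

Much of the real engineering effort goes into dealing with day-to-day signal drift, but the core idea really is this kind of regression from neural activity to intended movement.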
----------------------------------ooo------------------------------
How quantum technology brings us closer to AGI
Quantum AI is another revolutionary development, probably required to bring us AGI.
Quantum computing is based on the beautiful, yet dumbfounding principles of quantum mechanics.
Quantum computers can, for certain problems, explore massive numbers of possibilities at the same time. Scientists like Peter Shor, known for Shor's algorithm, and John Preskill, who coined the term "quantum supremacy", are leading figures in this field.
Quantum AI uses quantum computing to solve very, very complex problems at unprecedented speeds. And this will accelerate the development of AGI by allowing AI to learn, adapt, and reason more like humans.
Quantum AI's potential to speed up AGI is real, people!
Traditional AI relies on classical computing, which processes data sequentially on CPUs, or with a degree of parallelism on chips like GPUs, NPUs, and TPUs (read the article: Chip happens!).
Quantum computers go beyond sequential or parallel processing, because they can explore multiple possibilities at the same time (thanks to the quantum-mechanical concept of superposition), and that makes them exponentially faster for certain tasks.
This capability will enable AI to develop solutions that are currently beyond our reach.
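To make the superposition point a bit more concrete, here is a purely classical toy simulation of an n-qubit register in Python with numpy. It is not how a quantum computer works internally (a real device never stores these amplitudes explicitly), but it shows the bookkeeping: after one Hadamard gate per qubit, all 2^n basis states carry amplitude at once, and a single subsequent operation acts on every one of them in one step. The register size is an arbitrary choice for illustration.

```python
import numpy as np

# Classical toy simulation of an n-qubit register.
# One Hadamard per qubit puts the register into an equal superposition:
# all 2**n basis states carry amplitude simultaneously, and any further
# gate (matrix) then acts on every one of those amplitudes in one step.

n = 3                                            # illustrative register size
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # single-qubit Hadamard gate

# Build the n-qubit Hadamard transform as a Kronecker product.
H_n = H
for _ in range(n - 1):
    H_n = np.kron(H_n, H)

# Start in |00...0>: amplitude 1 on the first basis state.
state = np.zeros(2 ** n)
state[0] = 1.0

# Apply the transform: the register now holds 2**n equal amplitudes.
state = H_n @ state
print("amplitudes:", np.round(state, 3))                          # eight values of ~0.354
print("probabilities sum to:", np.round(np.sum(state ** 2), 3))   # 1.0
```

This exponential bookkeeping is exactly what becomes intractable to simulate classically, and it is why algorithms that can exploit a problem's structure (Shor's factoring being the famous example) get such dramatic speed-ups.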
But there are social questions to be raised.
I have hinted at those a couple of times before, and I am not going to repeat them here.
But I want to go beyond the basic ethical issues we are faced with at this moment.
One question we need to answer is what it means to be human when our cognitive and physical abilities are enhanced by machines.
And how will our society adapt to a world where the boundaries between organic and synthetic life are increasingly blurred?
Now that AGI seems within reach, wetware can be built from brain cells or gels, and we are already experimenting with augmenting humans with AI. This raises some profound ethical concerns.
First, the issue of consciousness and rights will become critical.
If AGI and synthetic humans possess consciousness or exhibit human-like qualities, we will need to determine their moral and legal rights. There is a concept called "robot rights", or "AI rights", which covers the debate over whether such entities should be granted rights similar to those of humans (autonomy, freedom from harm, liberty, and so on). Legal frameworks will need to evolve along with the technology.
Scholars like Nick Bostrom and David Chalmers have explored the implications of machine consciousness and have started to raise questions about the ethical treatment of AI entities.
Just think of the possibility that a synthetic being is conscious, could experience suffering or have desires similar to ours, and is locked inside a "box".
And another concern is identity and autonomy.
To me, the use of brain cells in wetware thoroughly blurs the line between human and machine!
And that raises questions about personal identity and autonomy. Researchers like Henry Greely have discussed the implications of using human brain tissue in artificial systems, particularly in the context of neuroethics.
The potential for creating hybrid entities that are partly human and partly machine will introduce enormous complexities in defining what an individual identity actually is. And the prospect of augmenting human cognition through brain-machine interfaces will probably lead to disparities in intelligence and autonomy. And that will of course lead to further social inequalities.
Given these developments, I am very much in favor of more thought in this area, and of more legal and ethical frameworks, because we don't want to compromise human dignity or create new forms of exploitation. And knowing humans, there will always be people who will abuse those new possibilities.
If you want to know more, these two resources are very good: the Stanford Encyclopedia of Philosophy entry on the ethics of AI by Vincent C. Müller (https://plato.stanford.edu/entries/ethics-ai/), and Robot Rights by David J. Gunkel (https://www.amazon.com/Robot-Rights-David-J-Gunkel/dp/0262038620).
An afterthought
This AI evolution framework began with simple AI support tools in the first space, where basic models improved simple, unimodal tasks like writing and image generation, and also made advanced analytics and predictions possible.
This laid the foundation for the productive space, where AI became integral to work and communication; it is currently driving unprecedented productivity gains for companies and introducing new ways for the rest of us to interact with the world around us.
Now that we are entering the redesign space, the technology is slowly starting to augment, and even merge with, biology, with developments like wetware and Brain-Machine Interfaces blurring the lines between human and machine. Here, ethical questions about identity and rights emerge, challenging our understanding of what it means to be a person.
The potential future of the singularity, the point where AI moves beyond human intelligence, is both exciting and a cause for concern.
This could happen as early as 2029, according to the futurist Ray Kurzweil and others in the industry.
And since AI advances at a high speed, there is a possibility of machines not only assisting us but also exceeding us. Quantum computing is expected to accelerate this process, because it allows AI to solve complex problems much faster.
But the integration of AI with humans, particularly through wetware and BMIs, raises a lot of ethical questions.
If machines can think and feel like humans, what rights should they have? How do we maintain human identity and autonomy in a world where technology can alter our very essence?
These are not only ethical but also very practical challenges that society must address, because the moment the singularity happens (if it occurs at all) will be a moment of enormous transformation.
It represents the ultimate convergence of the productive and redesign spaces, where human and machine intelligence merge into something entirely new.
This may lead to totally new developments and help us approach longevity escape velocity, the prospect of overcoming aging.
This could lead to longer lives, but it also raises questions about how society will manage resources and maintain social stability.
I am not sure if I would like to have Stalin, Mao, Attila, and Genghis around in 2030.
Looking back, the journey went from the generative space, through the productive and redesign spaces, toward the singularity, and that presents incredible opportunities.
The technologies that we are developing today will transform our world in ways that I can barely imagine. But with these opportunities come responsibilities. I am not a huge fan of regulation, but in this case I am an advocate of more stringent and centralized global cooperation.
We must approach this future with care, wisdom, and a deep sense of responsibility to make sure that these advancements will benefit all of us and to avoid the pitfalls of inequality and exploitation, and the loss of identity.
Because the choices that we make today will shape the future for generations to come.
If you have come this far, you are as die-hard a geek as I am, and I thank you from the bottom of my heart for sticking with me for so long.
And I have something for you:
Solve this little math puzzle, and get a copy of my latest book
Exploring consciousness: a guide for AI students
What is: 5×8?9+7=?
Post your answer in the comments below and I'll contact you!
Thank you for sticking around!
Signing off - Marco
Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.