The Weekly Waffle: You Can't Handle the Truth!
It’s been a great week for artificial intelligence. OpenAI held its ChatGPT developer day, which brought us an App Store-like shop for apps built on the centralized AI, plus the news that ChatGPT-5 is just around the corner. The Economist had a special report on “omnistars” — how AI will turn artists into even bigger celebrities through omnipresence on every medium. And I sat down with my CEO and the Uphold team to figure out what we should do next with artificial intelligence.
Then Bittensor launched yet another spectacular subnet (its fourth, or is it its sixth?), and people realized it was the real deal: the price of TAO tripled in a short time. Its killer use case, which will likely come from academia, remains elusive, but I'm totally on board with decentralized AI solutions.
There are three attitudes you can take on AI, according to John Vervaeke, the Canadian cognitive psychologist who lately took an interest in AI through his work on relevance realization. Relevance realization is important in psychology, and it's even more important in AI: how do you tell a computer which data is important and which can be ignored?
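To make that question concrete, here is a toy sketch in Python — purely illustrative, my own invention rather than Vervaeke's model or any production system. It scores candidate facts against a query by word overlap and converts the scores into softmax weights, a crude stand-in for the learned attention real models use to decide what matters.

```python
# Toy "relevance realization" for a machine: score candidate facts against
# a query and weight them. Real systems use learned embeddings and attention;
# plain word overlap is used here only to make the idea concrete.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def relevance_weights(query: str, facts: list[str]) -> list[float]:
    """Softmax over similarity scores: a crude 'what matters here?' signal."""
    q = Counter(query.lower().split())
    scores = [cosine(q, Counter(f.lower().split())) for f in facts]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

facts = [
    "the price of TAO tripled this week",
    "a quantum computer reached 1000 qubits",
    "my cat prefers the red blanket",
]
for fact, w in zip(facts, relevance_weights("what happened to crypto prices this week", facts)):
    print(f"{w:.2f}  {fact}")
```

The cat fact scores lowest; the crypto fact scores highest. Everything hard about AI hides in doing this well at scale.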
The three attitudes I see in AI today echo Vervaeke's original descriptions: you can be a Zoomer, a Doomer, or a Foomer. Zoomers are the utopians who believe in the power of AI and in our power to harness it. Doomers expect the AI apocalypse: the coming war of humans vs. machines and our extermination in a simplistic game-theory model. And then there are the Foomers, catastrophists who see AI growing exponentially so fast that we immediately need a global authority to put a stop to AI development. One could argue that there are really only two camps, optimists and pessimists, but the difference between Doomer and Foomer matters. Doomers think it's all in vain and too late anyway, whereas Foomers believe we need strict rules (which would ultimately benefit only patent holders and centralized operators). Where I come from (the computer science tribe of decentralization fanatics), that's a bad, bad thing.
In any case, all of them are wrong. Yes, AI will cross the threshold to AGI at some point, and once we figure out what “consciousness” actually is, it may acquire some form of self-awareness and human-like properties. But that doesn’t mean it will control or exterminate us. If consciousness is a quantum effect, as Penrose proposed in the late 80s, then we need one more ingredient in our artificial broth: quantum physics, i.e., quantum computing. That is coming too, heralded by more and more "quantum-resistant blockchains" (an idiotic marketing term). But we now have a 1,000-qubit quantum computer (also announced this week), so yes, bring it on, quark army! (Microsoft Azure Quantum is available for everyone, just like ChatGPT.)
AI, despite the hype, the promises, and the fast growth, is a child. A child that needs good care, nurturing, and discipline. It doesn’t need overarching government regulation, or to be locked up in the basement, never to be seen or heard. It needs education (training data) and accountability (blockchain). We now finally have blockchains like Kaspa $KAS that are fast enough to handle the data deluge. Kaspa would make an excellent data layer for an Internet Computer like ICP or, if it survives, good old Ethereum. (I don't hold much hope for the latter.)
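What could accountability (blockchain) mean in practice? A minimal sketch, assuming nothing about Kaspa's actual API (the on-chain posting step is deliberately left out): hash each training record into a Merkle-style commitment whose root could be anchored on a fast ledger, so anyone can later verify exactly which data a model was educated on.

```python
# Sketch of blockchain-backed accountability for training data: fold the
# records into a single Merkle-root commitment that could be posted on-chain.
# No real Kaspa API is assumed; this only shows the commitment side.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """Hash leaves, then fold pairwise until one root commitment remains."""
    level = [h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

training_batch = [b"doc-001: ...", b"doc-002: ...", b"doc-003: ..."]
print("commitment to anchor on-chain:", merkle_root(training_batch).hex())
# Anyone holding the original records can recompute this root and verify
# the model was trained on exactly this data, unaltered.
```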
There is no realistic way to stop or control the development of artificial intelligence. Humans are naturally curious. We want the truth, and we are prepared to “willingly suffer to discover it” (Vervaeke). We exploit it and transform it into something we can sell (see the many ChatGPT skins in app stores, and the many fake blockchain AI projects launching these days on the promise of rising token prices, without any real innovative capacity in them).
The problem is: we can’t handle the truth.
We have no experience in educating a child-like AI. We have a tendency to escalate everything, and turn every truth into a truth that feeds our avarice. We have created massive number-crunching machines in the hope of finding the truth and subjugating it to our will. And we are very, very impatient (see the release hype around ChatGPT-5). You have millions of data sets — I have trillions.
In this race toward the grand unveiling (will it augment us, turn us into cyber-humans, or exterminate us?), the big corporations with the deepest pockets seem to have the upper hand. They don’t, though, because their greed (sorry, Sam) makes them commit fatal errors. Data pollution in centralized AI is a real problem. Some speak of a “Datageddon” when it all goes down the drain, and of the end, by 2026, of the training data needed for meaningful human-machine commingling.
This is why projects like Bittensor and Roko Network are so important. They are fully decentralized and on the blockchain. They tap the wisdom of the masses, and they run far less risk of turning on us or drowning in bad data than centralized operators like OpenAI or Google.
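A hedged illustration of that "wisdom of the masses" argument (Bittensor's actual Yuma consensus is far more elaborate; this only shows the core intuition): aggregate many independent model outputs with a median instead of a mean, and a few polluted or hostile contributors barely move the result.

```python
# Why decentralized aggregation resists bad data: a robust statistic like
# the median shrugs off a few outliers where a naive mean does not.
from statistics import median

honest = [0.71, 0.69, 0.73, 0.70, 0.72]   # independent model outputs
polluted = honest + [0.05, 0.99]          # two bad-data or hostile peers

print("mean with pollution:  ", round(sum(polluted) / len(polluted), 3))
print("median with pollution:", round(median(polluted), 3))
# The mean drifts to ~0.656; the median stays at 0.71, the honest consensus.
```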
The three stages of AI (Narrow or Weak AI; General or Strong AI, i.e. AGI; and Super-Intelligent AI, i.e. Profound AI or ASI) will happen in the next decade, there is no doubt in my mind. No president, no university dean, and no “global contract to regulate AI” will divert us from the path to either enlightenment or devastation. Profound AI will be the deal clincher. It is a universal problem solver in all domains: it can solve well-defined problems (AI), ill-defined problems (AGI), and — here is the key — “undefinable” problems.
Humans have an amazing capacity to think, ponder, and muse about “undefinable” problems. The final frontier is always the unknown, and because we are naturally curious, we dive into black holes and try to get back to the first picosecond of the Big Bang, just as children open every gadget to see what’s inside.
Children do that, like AI at the present time, but ultimately they (usually) manage to grow up nicely and learn how to behave in society. By nature, AI is not evil. By nurture, it will be bent to our will so that it behaves in our interest (the alignment problem).
We do like simple answers to complex questions, because most of us don’t have the time, the will, or the intellectual capacity to understand complex questions. How often have you heard the phrase “Explain it to me like I’m a five-year-old!”? In Douglas Adams’ Hitchhiker’s Guide to the Galaxy, this is brilliantly and hyperbolically captured in the “Answer to the Ultimate Question of Life, the Universe, and Everything”: 42. I can live with that.
AI is to us like the Wizard was to Dorothy. She had the curious guts to pull back the curtain. When we do so and expose the inner workings and future power of AI (we’ll need blockchain-based accountability for that), what will we find?
What do you see?