SINGULARITY / 09.06.2023
ARTIFICIAL INTELLIGENCE WEEKLY REVIEW
CATHIE WOOD BULLISH ON BITCOIN AND AI CONVERGENCE
The ARK Invest CEO shares her views on the intersection of Bitcoin and artificial intelligence, highlighting its economic implications.
In a recent X (formerly Twitter) post, Cathie Wood, the CEO of ARK Invest, expressed her optimistic view on the intersection of Bitcoin and artificial intelligence (AI).
In the post, Wood pointed to the transformative potential of the synergy between AI and Bitcoin, emphasizing the possibilities and positive implications the two technologies hold for diverse industries and the overall economic landscape.
Backing Wood’s optimistic outlook is a research document published by ARK Invest titled “Investing In Artificial Intelligence: Where Will Equity Values Surface?”, suggesting that both Wood and ARK Invest are weighing the significance of AI within their investment strategies.
Over the years, Wood has allocated investments to various AI-related stocks, demonstrating her strong belief in the emerging technology. Wood’s well-known enthusiasm for Bitcoin is evident through ARK’s endeavors concerning a Bitcoin exchange-traded fund (ETF). Furthermore, ARK is no stranger to digital asset sector investments, with substantial holdings in Coinbase and Robinhood.
The document also highlights ARK Invest’s strategies that have reaped rewards from investments in AI tech stocks. The ARK Disruptive Innovation ETF, dedicated to AI and other pioneering technologies, outperformed the Nasdaq 100 Index with a 41.2% gain at mid-year.
Wood’s post, along with ARK’s research, illustrates the growing influence of AI in the realm of investments. The fusion of Bitcoin and AI could transform corporate operations, reshaping productivity and cost dynamics. As investors explore fresh avenues for growth, Wood’s nod to Bitcoin and AI could see more investment flowing into the two technologies in the future.
HOW THE AI REVOLUTION WILL RESHAPE THE WORLD
TIME, by MUSTAFA SULEYMAN, co-founder and CEO of Inflection AI. In 2010 he co-founded DeepMind, which was acquired by Google. He is the author of the upcoming book The Coming Wave.
We are about to see the greatest redistribution of power in history.
Over millennia, humanity has been shaped by successive waves of technology. The discovery of fire, the invention of the wheel, the harnessing of electricity—all were transformational moments for civilization. All were waves of technology that started small, with a few precarious experiments, but eventually broke across the world. These waves followed a similar trajectory: breakthrough technologies were invented, delivered huge value, and so proliferated, becoming more effective, cheaper, and more widespread as they were absorbed into the normal, ever-evolving fabric of human life.
We are now facing a new wave of technology, centered around AI but including synthetic biology, quantum computing, and abundant new sources of energy. In many respects it will repeat this pattern. Yet it will also depart from it in crucial ways only now becoming clear. Amidst all the hype, the hope, the fear, I think the fundamentals are getting lost; the unique characteristics of this wave are getting missed in the noise. Understanding them, seeing what, exactly, is changing, is critical to understanding the future.
AI is different from previous waves of technology because of how it unleashes new powers and transforms existing power. This is the most underappreciated aspect of the technological revolution now underway. While all waves of technology create altered power structures in their wake, none have seen the raw proliferation of power like the one on its way.
Think of it like this. Previous eras’ most powerful technologies were generally reserved for a small, capital-rich elite or for national governments. Building a steam-powered factory, an aircraft carrier, or a nuclear power plant was a costly, difficult, and immense endeavor. With the leading technologies of our time, that’s no longer going to be true.
If the last great tech wave—computers and the internet—was about broadcasting information, this new wave is all about doing. We are facing a step change in what’s possible for individual people to do, and at a previously unthinkable pace. AI is becoming more powerful and radically cheaper by the month—what was computationally impossible, or would have cost tens of millions of dollars a few years ago, is now widespread.
These AIs will organize a retirement party and manage your diary; they will develop and execute business strategies while designing new drugs to fight cancer. They will plan and run hospitals, or invasions, just as readily as they will answer your email. Building an airline, or grounding an entire fleet, becomes more achievable. Whether it’s commercial, religious, cultural, or military, democratic or authoritarian, every possible motivation you can think of can be dramatically enhanced by having cheaper power at your fingertips. These tools will be available to everyone, billionaires and street hustlers, kids in India and pensioners in Beverly Hills, a proliferation of not just technology but capability itself.
Power, the ability to accomplish goals, everywhere, in the hands of anyone who wants it. I’m guessing that’s going to be most people. This is far more empowering than the web ever was.
And it’s coming faster than we can adequately prepare for. This is an age when the most powerful technologies are open-sourced in months, when millions have access to the cutting edge, and that cutting edge is the greatest force amplifier ever seen. This new era will create giant new businesses, empower a long tail of actors—good and bad—supercharge the power of some states, erode that of others. Whether a giant corporation or a start-up, an established party or an insurgent movement, a wild-eyed entrepreneur or a lone wolf with an ax to grind, here is an immense potential boost. Winners and losers will emerge quickly and unpredictably in this combustible atmosphere as power itself surges through the system. In short, this represents the greatest reshuffling of power in history, all happening within the space of a few years.
Those most comfortable today look vulnerable. Even as the discourse around AI has reached a fever pitch, those with power today, the professional classes, feel shockingly unprepared for the disruptions and new formations of power this tumult will bring. They—the doctors, lawyers, accountants, business VPs—will not emerge unscathed, and yet most I speak to are still incredibly blasé about the changes afoot. It’s not just automated call centers. This wave will fundamentally reshape and reorder society, and it is those with the most to lose, reliant on established capital, expertise, authority, and security architectures, who are precisely the most exposed.
I’ve seen this kind of willful blindness before. I call it “pessimism aversion”: a tendency to look away from sweeping technological change and what it really means. Until recently it was a common affliction of the Silicon Valley elite, many of whom pursued technological “disruption” without considering the likely outcomes. The arrival of generative AI and other AI products has started to change that. Although there is much further to go, leaders in Silicon Valley have begun taking a more proactive and precautionary approach to the development of the very largest AI models. But more widely it’s vital that societies facing this wave do not dismiss it as hot air, turn away, and get caught out. The preparation for what I call containment, a comprehensive program of managing these tools, needs to begin now.
As we start to see power itself proliferating, its distribution and nature fundamentally changed, pessimism aversion is no answer. It’s time to confront the consequences of this shift in who can do what, when and how, understand what it means, and begin to plan for how we can control and contain it for everyone’s benefit. History can be a useful guide. But with AI, synthetic biology and the rest, we can be confident of one thing: we are facing the genuinely unprecedented.
EVERYBODY WILL HAVE THEIR OWN AI-POWERED "CHIEF OF STAFF"
Mustafa Suleyman, the co-founder of DeepMind, Google's AI division, told CNBC during an interview that everybody is going to have their own AI-powered personal assistants within the next five years as the technology becomes cheaper and more widespread.
In particular, Suleyman, now the CEO of Inflection AI, the tech startup behind an AI chatbot called Pi, said that everybody will have access to an AI that "knows you," is "super smart," and "understands your personal history."
Suleyman, who co-authored a book titled "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma" that documents the development of AI, even predicts that AI will be able to "preserve things in its working memory." In turn, he says the technology can help make people's daily lives just a little bit easier.
"It will be able to reason over your day, help you prioritize your time, help you invent, be much more creative," he said. "It will be a research assistant, but it will also be a coach and companion."
As the technology continues to evolve, Suleyman believes that AI's role in people's lives will go beyond just personal assistance.
"The way I see it is that in 5 years, everybody is gonna have their own 'chief of staff,'" he said, which refers to a high-level position at a company that is intended to help executives make better business decisions, according to the Harvard Business Review . Some consider a chief of staff a right-hand person to the boss — and that's what an AI version could be.
Like a chief of staff, Suleyman said that AI will "intimately know your personal information, be completely aligned with your interests, and help you manage and process all the information you need."
Suleyman isn't the only tech leader who sees the revolutionary potential in AI.
Bill Gates, the cofounder of Microsoft, wrote in a seven-page letter that AI is "as fundamental" as the creation of the internet, and predicts that the technology can help make the jobs of healthcare workers and teachers easier.
Tim Cook, the CEO of Apple, told investors on an earnings call earlier this year that the technology has "enormous potential" to "affect virtually everything we do" at the iPhone maker.
As for Suleyman, he says it's only a matter of time before everybody has access to AI's impressive capabilities.
ONE OF THE GODFATHERS OF AI INSISTS OUR FEARS OF AI'S EVER-EXPANDING COMPETENCY ARE OVERBLOWN
In late January, just a few months after OpenAI unleashed ChatGPT on the world, Meta’s vice president and chief AI scientist Yann LeCun dashed out a tweet that was at once a pointed dig at a rising competitor and a gentle nudge to his own company’s higher-ups.
“By releasing public demos that, as impressive and useful as they may be, have major flaws, established companies have less to gain and more to lose than cash-hungry startups,” LeCun wrote. “If Google and Meta haven’t released chatGPT-like things, it’s not because they can’t. It’s because they won’t.”
At the time, at least, that was true. For years, corporate giants like Google and Meta had all of the technical prowess of OpenAI and a sliver of its risk appetite. But less than a year later, ChatGPT has changed all that, kicking off a race among once-cautious companies to turn the science they’d been working on behind closed doors into public-facing products.
Now, Meta has answered LeCun’s subtle challenge by taking an arguably greater risk than anyone with the debut of Llama 2, its latest large language model. Unlike GPT-4, which is available from OpenAI for a fee, Llama 2 is freely available for commercial and research use, throwing the gates wide open to almost anyone who wants to experiment with it. (Though, as purists note, while Meta describes Llama 2 as “open source,” access to its training data is still closed off.)
To LeCun, Meta’s about-face was a welcome change: Expanding access to this technology and letting other people build stuff on top of it, is, he argues, the only way to ensure that it’s not ultimately controlled by a small group of Silicon Valley engineers. “Imagine the future when everyone uses some sort of chatbot as their main interface to the digital realm. You don’t go to Google. You don’t go to Facebook. . . . You just talk to your virtual assistant,” LeCun says. “You want that AI system to be open source and to be transparent, because there’s going to be a lot riding on it.”
LeCun, of course, is one of the most prominent leaders in this small group of Silicon Valley engineers. Often referred to as one of the “godfathers of AI,” LeCun pioneered, in the 1990s and early 2000s, the subset of machine learning known as deep learning, upon which large language models like GPT-3 and GPT-4 are built. In 2013, LeCun created the Facebook AI Research lab, which he said at the time would “bring about major advances in Artificial Intelligence.”
But despite the company’s investment in research, LeCun saw firsthand how it often resisted releasing this technology into the wild out of fear of legal risk and public backlash. In fact, less than two weeks before ChatGPT’s debut, Meta released its own chatbot called Galactica, but pulled the plug on it three days later, after it was widely panned for spewing nonsense—you know, kind of like ChatGPT.
With Llama 2, Meta is making no such apologies. LeCun acknowledges that giving this technology away for free comes with a risk of user abuse. Facebook itself started as a social network for college kids and wound up being used to subvert elections and fuel terrorist propaganda. There will undoubtedly be unintended consequences of generative AI. But LeCun believes giving more people access to this technology will also help rapidly improve it—something that LeCun says we should all want.
He likens it to a car: “You can have a car that rides three miles an hour and crashes often, which is what we currently have,” he says, describing the latest generation of large language models. “Or you can have a car that goes faster, so it’s scarier . . . but it’s got brakes and seatbelts and airbags and an emergency braking system that detects obstacles, so in many ways, it’s safer.”
Of course, there are those who believe the opposite will be true—that as these systems improve, they’ll instead try to drive all of humanity off the proverbial cliff. Earlier this year, a slew of top AI minds, including Geoffrey Hinton and Yoshua Bengio, LeCun’s fellow “AI godfathers” who shared a 2018 Turing Award with him for their advancements in the field of deep learning, issued a one-sentence warning about the need to mitigate “the risk of extinction from AI,” comparing the technology to pandemics and nuclear war.
LeCun, for one, isn’t buying the doomer narrative. Large language models are prone to hallucinations, and have no concept of how the world works, no capacity to plan, and no ability to complete basic tasks that a 10-year-old could learn in a matter of minutes. They have come nowhere close to achieving human or even animal intelligence, he argues, and there’s little evidence at this point that they will.
Yes, there are risks to releasing this technology, risks that giant corporations like Meta have quickly become more comfortable with taking. But the risk that they will destroy humanity? “Preposterous,” LeCun says.
CHATGPT GLOSSARY: 41 AI TERMS THAT EVERYONE SHOULD KNOW
With Google, Microsoft and just about every other company getting into AI, it can be hard to keep up with the latest terminology. This glossary helps.
ChatGPT, the AI chatbot from OpenAI, which has an uncanny ability to answer any question, was likely your first introduction to AI. From writing poems and resumes to concocting fusion recipes, the power of ChatGPT has been compared to autocomplete on steroids.
But AI chatbots are only one part of the AI landscape. Sure, having ChatGPT help do your homework or having Midjourney create fascinating images of mechs based on country of origin is cool, but AI's potential could completely reshape economies. That potential could be worth $4.4 trillion to the global economy annually, according to McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence.
As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you're trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know.
This glossary will be updated continuously.
—
Artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities.
AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.
AI safety: An interdisciplinary field concerned with the long-term impacts of AI and how it could progress suddenly to a superintelligence that could be hostile to humans.
Algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, to then learn from it and accomplish tasks on its own.
Alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions toward humans.
Anthropomorphism: The human tendency to give nonhuman objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, such as believing it's happy, sad or even sentient.
Artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field in computer science that aims to build systems that can perform human tasks.
Bias: In regard to large language models, errors resulting from the training data. These can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.
Chatbot: A program that communicates with humans through text that simulates human language.
ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.
Cognitive computing: Another term for artificial intelligence.
Data augmentation: Remixing existing data or adding a more diverse set of data to train an AI.
Deep learning: A method of AI, and a subfield of machine learning, that uses artificial neural networks with multiple layers to recognize complex patterns in pictures, sound and text. The approach is loosely inspired by the human brain.
Diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
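As a rough illustration, here is the forward, noise-adding half of that process as a minimal NumPy sketch, with made-up data and a simplified linear schedule (real diffusion models use carefully tuned noise schedules and a trained network for the recovery step):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x, t, num_steps=1000):
    """Blend data x with Gaussian noise: t=0 leaves it clean, t=num_steps is pure noise."""
    alpha = 1.0 - t / num_steps                    # fraction of the original signal kept
    noise = rng.standard_normal(x.shape)
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise

photo = rng.random((8, 8))                         # stand-in for an 8x8 grayscale photo
slightly_noisy = forward_diffuse(photo, t=100)     # still mostly recognizable
mostly_noise = forward_diffuse(photo, t=900)       # nearly pure noise
```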
Emergent behavior: When an AI model exhibits unintended abilities.
End-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It's not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once.
Ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse and other safety issues.
Foom: Also known as fast takeoff or hard takeoff. The concept that once someone builds an AGI, it may already be too late to save humanity.
Generative adversarial networks, or GANs: A generative AI model composed of two neural networks to generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks to see if it's authentic.
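A minimal structural sketch of that two-network setup in PyTorch (toy sizes, untrained, for illustration only):

```python
import torch
import torch.nn as nn

# Toy 1-D GAN: the generator turns random noise into fake samples;
# the discriminator scores how "real" a sample looks (0 = fake, 1 = real).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

noise = torch.randn(32, 8)        # a batch of 32 random seeds
fake = generator(noise)           # the generator invents new "data"
verdict = discriminator(fake)     # the discriminator judges authenticity
# Training alternates between the two: the discriminator learns to separate
# real data from fakes, while the generator learns to fool it.
```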
Generative AI: A content-generating technology that uses AI to create text, video, computer code or images. The AI is fed large amounts of training data and finds patterns in it to generate its own novel responses, which can sometimes be similar to the source material.
Google Bard: An AI chatbot by Google that functions similarly to ChatGPT but pulls information from the current web, whereas ChatGPT is limited to data until 2021 and isn't connected to the internet.
Guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn't create disturbing content.
Hallucination: An incorrect response from AI. Can include generative AI producing answers that are incorrect but stated with confidence as if correct. The reasons for this aren't entirely known. For example, when asking an AI chatbot, "When did Leonardo da Vinci paint the Mona Lisa?" it may respond with an incorrect statement saying, "Leonardo da Vinci painted the Mona Lisa in 1815," which is 300 years after it was actually painted.
Large language model, or LLM: An AI model trained on mass amounts of text data to understand language and generate novel content in human-like language.
Machine learning, or ML: A component in AI that allows computers to learn and make better predictive outcomes without explicit programming. Can be coupled with training sets to generate new content.
Microsoft Bing: A search engine by Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It's similar to Google Bard in being connected to the internet.
Multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech.
Natural language processing: A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models and linguistic rules.
Neural network: A computational model that resembles the human brain's structure and is meant to recognize patterns in data. Consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.
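A bare-bones sketch of such a network in NumPy, with random, untrained weights (learning would mean adjusting these weights to reduce error):

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_network(x, w1, b1, w2, b2):
    """Two layers of 'neurons': multiply by weights, add a bias, apply a nonlinearity."""
    hidden = np.maximum(0.0, x @ w1 + b1)   # ReLU activation in the hidden layer
    return hidden @ w2 + b2                 # linear output layer

x = rng.random(4)                                    # 4 input features
w1, b1 = rng.standard_normal((4, 8)), np.zeros(8)    # 4 inputs -> 8 hidden neurons
w2, b2 = rng.standard_normal((8, 1)), np.zeros(1)    # 8 hidden -> 1 output
print(tiny_network(x, w1, b1, w2, b2))
```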
Overfitting: An error in machine learning in which a model fits its training data so closely that it can identify specific examples from that data but fails to generalize to new data.
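A toy NumPy illustration with invented data: both fits use the same noisy samples, but the high-degree polynomial memorizes the noise instead of the underlying trend:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)  # noisy training points
x_new = np.linspace(0, 1, 100)                                  # unseen inputs

simple = np.polyval(np.polyfit(x_train, y_train, 3), x_new)     # captures the trend
overfit = np.polyval(np.polyfit(x_train, y_train, 9), x_new)    # memorizes the noise
# The degree-9 curve passes through every training point exactly yet swings
# wildly between them, so its predictions on the unseen x_new are far worse.
```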
Parameters: Numerical values that give LLMs structure and behavior, enabling them to make predictions.
Prompt chaining: An ability of AI to use information from previous interactions to color future responses.
Stochastic parrot: An analogy of LLMs that illustrates that the software doesn't have a larger understanding of meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.
Style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and use them on another. For example, taking the self-portrait of Rembrandt and re-creating it in the style of Picasso.
Temperature: A parameter that controls how random a language model's output is. A higher temperature means the model takes more risks.
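A minimal sketch of how that works at sampling time, in plain NumPy with made-up logits (not any particular model's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_temperature(logits, temperature=1.0):
    """Pick a token index: low temperature sharpens the distribution, high flattens it."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())    # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.1]                     # hypothetical scores for three tokens
print(sample_with_temperature(logits, 0.2))  # almost always the top-scoring token
print(sample_with_temperature(logits, 2.0))  # noticeably more random
```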
Text-to-image generation: Creating images based on textual descriptions.
Training data: The datasets used to help AI models learn, including text, images and code.
Transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like in sentences or parts of images. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
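The core mechanism behind this is attention; here is a compact NumPy sketch of scaled dot-product self-attention, using toy sizes and random stand-in embeddings:

```python
import numpy as np

def self_attention(q, k, v):
    """Every position scores every other position, so the whole sequence is
    weighed at once instead of word by word."""
    scores = q @ k.T / np.sqrt(q.shape[-1])          # pairwise similarity of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # context-weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.random((5, 16))                  # 5 "words", each a 16-dim embedding
contextual = self_attention(tokens, tokens, tokens)
print(contextual.shape)                       # (5, 16): each word now carries context
```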
Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine's ability to behave like a human. The machine passes if a human can't distinguish the machine's response from another human.
Weak AI, aka narrow AI: AI that's focused on a particular task and can't learn beyond its skill set. Most of today's AI is weak AI.
Zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.