AI Experts: A House Divided

The realm of artificial intelligence (AI) has evolved into a cacophony of competing voices, each vying for attention and propounding extreme viewpoints on every minutia within their echo chambers.

Amidst this din, a recent admission by Sam Altman (https://bit.ly/4851qQp) indirectly acknowledged the existence of a clandestine AGI product, lending credence to whispers of significant progress towards machines achieving genuine human-like intelligence. However, this stands in stark contrast to the assertion by Meta's AI chief, also made over the weekend, that AGI remains decades away (https://cnb.cx/3sSkrGR).

The vagueness surrounding terms like "human-level intelligence" allows all sides to keep claiming they are right. Between AI optimists and pessimists, however, there is little common ground. Here is a quick list of a dirty dozen issues with deep divides:

  • The timeline for AGI
  • Long-term impact on jobs
  • LLMs' ability to do harm
  • Whether governments should regulate
  • Biases within LLMs and their trajectory over time
  • LLMs' ability to drive innovation
  • Environmental impact
  • Whether GenAI creates anything truly novel
  • GenAI's capacity for intentionality and purpose
  • Trends in deepfakes, hallucinations, and the like
  • Legal issues in the training datasets
  • Open-source versus closed-source models, and the implications of model transparency

The best strategy in financial markets is to adopt a balanced yet optimistic outlook on the innovation side. There are good reasons to be hopeful about the future of AI, especially given the rapid and unexpected advances made since mid-2022. Moreover, the world's best capital and talent are being poured into the field, creating a positive feedback loop that increases the chances of success. Even if one is skeptical about the risks and other issues listed above, one cannot ignore the massive efforts and investments pushing the boundaries further.

This does not mean that one can predict what will happen next with any certainty. We are in the season of extensive new-year forecasts, and many of them are dominated by AI-related predictions. However, most of these AI predictions, in stark contrast to, say, election-related predictions, are vague and generic. Even those deeply involved in the field's R&D have little idea where it is headed beyond a few weeks.

Suppose that, within the next three to five years, some models learn mathematics from scratch, generate completely new scientific theories or solutions, discover new drugs or other useful synthetic molecules, create fully secure software without human assistance, or autonomous AI robots navigate unpredictable environments such as forests for extended periods beyond their training. In that case, the debate will be decidedly won by the optimists, regardless of whether the pessimists admit it.

In innovation investing, evidence-based investing should be preferred over spray-and-pray. When everyone claims "we do AI," one should search for material innovations with the ability to generate significant immediate revenues and high barriers to entry.
