Diminishing Returns of AI
I'm pretty, I know.


Turn back now... this might not be for you.

The following is a conversation between GPT-4o and me on Perplexity.ai about the nature of diminishing returns in relation to artificial intelligence. As a result, its style and tone of voice are not as 'enjoyable' as my previous article (assuming you found that one enjoyable).

I'm sharing this because I want to be transparent about how I do a lot of my research; I want to lead by example. Talk the Talk, Walk the Walk, so to speak. And I believe its insights are very valuable.

Prologue

As a direct result of ongoing challenges on LinkedIn and X from people convinced that diminishing returns are already slowing down AI improvements and technological progress, I've taken the time to collaborate with GPT-4o to explore this premise. As always, I've taken historical trends as a guideline rather than focusing on specific, granular scientific breakthroughs or technological developments.

For those of you familiar with it, I consider it possible that a science like Psychohistory, as imagined by Isaac Asimov, might well exist one day: an advanced continuation of the Drake Equation and the Kardashev scale, combined with statistics, economics, and anthropology. The world in which we live, in my view, is a collection of algorithms with a singular purpose: to grow entropy.

When we take this premise as a starting point, all of history can be used as a collection of signals that might help us predict the future, much like a weatherman looks at weather patterns from the past to understand what might be coming in the (near) future.

I do not claim to be Nostradamus, but it is more realistic to look at the future through the lens of history than to try to determine what is, or is not, plausible based solely on the latest breakthroughs or developments.

After all... an ant looking at grass stalks won't see the bulldozer coming... (OK, I could have picked a less ominous example.)

One more thing... the GPT-4o responses below are the ones I took from the 'Perplexity Page' we generated; the original answers were more in-depth and detailed, but too long to make for a good article.

Without further ado...


Diminishing Returns in AI

Aragorn: "What's the latest on diminishing returns concerning Artificial Intelligence? Specifically, transformer and LLM models? People keep telling me it's going to stop progress?"

GPT-4o (Perplexity):

Concerns about diminishing returns in AI development stem from several factors:

- Increasing computational and financial costs to train larger models.

- Modest performance gains from scaling up model size and training data.

- Environmental impact of energy-intensive training processes.

- Challenges in sourcing high-quality training data as existing datasets are exhausted.

However, these concerns may not account for ongoing innovations in model architectures, training techniques, and hardware. Recent advancements like mixture-of-experts models and more efficient inference methods suggest pathways to mitigate scaling challenges. Additionally, the application of AI to accelerate scientific research and technological development in other fields could lead to breakthroughs that overcome current limitations.
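An aside from me, not part of GPT-4o's answer: to make the mixture-of-experts point concrete, here is a minimal, purely illustrative Python sketch of top-k expert routing. The expert count, the top-k value, and the random stand-in 'gate' are my own toy assumptions rather than any production model's implementation; the point is simply that only a fraction of a layer's parameters has to run for each token.

```python
import random

NUM_EXPERTS = 8   # total expert sub-networks in the layer (toy number)
TOP_K = 2         # experts actually evaluated per token (toy number)

def gate(token: str) -> list[float]:
    """Stand-in for a learned router: give each expert a score for this token."""
    rng = random.Random(token)  # deterministic toy scores per token
    return [rng.random() for _ in range(NUM_EXPERTS)]

def moe_layer(token: str):
    scores = gate(token)
    # Only the top-k scoring experts run, so roughly TOP_K / NUM_EXPERTS of the
    # layer's parameters are active for any given token.
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    return top, TOP_K / NUM_EXPERTS

experts, active_fraction = moe_layer("fusion")
print(f"Experts used: {experts}, active parameter fraction: {active_fraction:.0%}")
```

That active fraction is why mixture-of-experts lets total parameter counts grow faster than the compute spent on each individual token.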

Graph of the data used to train AI models. The scale is logarithmic, which highlights the accelerating, exponential nature of the growth.


AI Accelerating Breakthroughs

Aragorn:"In your answer, I don't see any consideration of the impact of scientific accelerated breakthroughs and subsequent new technological breakthroughs. Did you factor those in? For example: If GPT-5 is indeed as good as expected, let's say 20% better than GPT-4o at 40 trillion tokens of training data. It would have an IQ of around 160? 180? This kind of improvement across the board of A.I. could help fuel the progress of Nuclear Fusion, Fission, developing new, more power-efficient chips, etc not to mention the finetuning of models and the development of new methods of improving models that require less on more training data. Am I wrong?

GPT-4o (Perplexity):

Advancements in AI models like GPT-5, with potential IQ equivalents of 160-180, could significantly accelerate breakthroughs across various scientific and technological domains. These improvements may fuel progress in nuclear fusion research, energy-efficient chip development, and other cutting-edge fields. AI's ability to enhance predictive capabilities, optimize processes, and facilitate interdisciplinary research can help overcome current limitations and speed up innovation cycles. By automating tedious tasks and providing advanced data analysis tools, AI allows human researchers to focus on high-level problem-solving, potentially leading to faster and more impactful discoveries.

DALL-E-generated image for this article.


Historical Tech Trends

Aragorn: "And if we follow this line of thought, would it then not be very, very possible that on the whole, we actually will not see any diminishing returns, but rather an acceleration? Looking back on the past 100 years, the development of technology, and especially compute and computer science, has only accelerated. Whenever skeptics claimed things would slow down, breakthroughs resulting from prior progress seemed to balance out any diminishing returns. Please correct me if I am wrong or being overly optimistic, but please base your answer on the available data and papers."

GPT-4o (Perplexity):

Technological progress has historically followed an exponential growth pattern, with each advancement building upon previous ones to create rapid and often unforeseen breakthroughs. This trend has been observed across various fields, including computing, telecommunications, and biotechnology. Throughout history, periods of skepticism about the potential for further technological advancements have often been followed by significant breakthroughs that propelled progress forward. For example, concerns about the limits of Moore's Law were countered by innovations in chip design and manufacturing. These historical patterns suggest that current concerns about AI development reaching a plateau may be shortsighted, as they fail to account for the potential of unexpected innovations and synergies between AI and other emerging technologies.
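Another quick aside from me to make the compounding intuition concrete. The doubling period below is an assumption chosen purely for illustration (a Moore's-Law-like cadence), not a measured figure:

```python
# Back-of-the-envelope compounding: a capability that doubles every
# `doubling_period_years` grows by a factor of 2 ** (years / doubling_period_years).

def compound_growth(doubling_period_years: float, horizon_years: float) -> float:
    return 2 ** (horizon_years / doubling_period_years)

for years in (10, 20, 50):
    factor = compound_growth(2, years)
    print(f"After {years:>2} years: ~{factor:,.0f}x the starting capability")
# Prints roughly 32x, 1,024x, and 33,554,432x respectively.
```

Small, steady doublings look unremarkable year to year, yet over a few decades they dwarf any single breakthrough, which is why linear intuitions about progress tend to undershoot.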

99.99% of all technology was created in the last 0.1% of human history.


AI as Innovation Catalyst

Aragorn: "Based on everything you've shown me, it seems like the proponents claiming A.I. development is going to come to a screeching halt are sorely misjudging the situation as a result of not taking into account the grander scheme of things and historical trends. Would that be a correct assumption?"

GPT-4o (Perplexity):

As a general-purpose technology, AI has the potential to drive significant productivity improvements across various sectors, including healthcare, energy, and manufacturing[1]. Its ability to enhance scientific research by providing advanced tools for data analysis, simulation, and prediction could lead to breakthroughs in fields like nuclear fusion and energy-efficient chip design. Estimates suggest that generative AI could boost productivity growth by 20% to 50% in the coming decade. Additionally, AI's capacity to facilitate interdisciplinary research by identifying connections between different fields may lead to innovative solutions that are not apparent through traditional methods, further accelerating progress in various scientific and technological domains.

The Law of Accelerating Returns, as proposed by Ray Kurzweil. First observed by Daniel Burnham in 1910.


Concluding Remarks and Outlook

The debate surrounding diminishing returns in artificial intelligence, particularly concerning large language models and transformer architectures, has raised important questions about the future trajectory of AI development. While concerns about scaling challenges, computational costs, and environmental impacts are valid, the overall picture suggests that AI progress will likely continue, albeit potentially through different pathways than simple model scaling.

The premise of diminishing returns is supported by observations of increasing computational and financial costs for training larger models, with only modest performance gains[1]. However, this view may be overly narrow, failing to account for ongoing innovations in model architectures, training techniques, and the broader impact of AI as a catalyst for innovation across multiple domains.

Historical trends in technological progress demonstrate that periods of perceived stagnation are often followed by unexpected breakthroughs[1]. In the case of AI, its potential as a general-purpose technology to accelerate scientific research and drive productivity improvements across various sectors suggests that even if current approaches face limitations, new avenues for advancement may emerge.

In conclusion, while the challenges of scaling current AI models are real, the transformative potential of AI as an innovation catalyst, coupled with ongoing research into more efficient architectures and training methods, suggests that development will keep progressing even if it takes routes other than simple scaling. This aligns with peer-reviewed research highlighting AI's potential to drive significant productivity growth and accelerate scientific discovery across many fields.


CITATIONS:

RESPONSE 1:

[1] https://p4sc4l.substack.com/p/ai-about-the-idea-that-ai-development

[2] https://www.fromthenew.world/p/diminishing-returns-in-machine-learning

[3] https://www.accountica.com.au/blog/ai-and-the-law-of-diminishing-returns-balancing-innovation-and-market-dynamics/

[4] https://garymarcus.substack.com/p/evidence-that-llms-are-reaching-a/comments

[5] https://www.lowyinstitute.org/the-interpreter/don-t-believe-hype-many-futures-ai

[6] https://klu.ai/glossary/scaling-laws

[7] https://spectrum.ieee.org/deep-learning-computational-cost

[8] https://www.aei.org/economics/will-breakthroughs-or-bottlenecks-be-the-bigger-story-with-ai-this-year/

[9] https://time.com/6300942/ai-progress-charts/

[10] https://www.exponentialview.co/p/ev-469

[11] https://clehaxze.tw/gemlog/2024/02-05-are-we-seeing-diminishing-returns-on-llm.gmi

[12] https://news.ycombinator.com/item?id=35552465

[13] https://garymarcus.substack.com/p/evidence-that-llms-are-reaching-a

[14] https://finance.yahoo.com/news/nobel-laureate-paul-romer-sees-093000071.html

[15] https://aclanthology.org/2023.findings-acl.82/

[16] https://news.ycombinator.com/item?id=40034680

[17] https://www.reddit.com/r/ChatGPT/comments/18ys7bb/are_chatgpt_and_other_llms_running_into/

[18] https://news.ycombinator.com/item?id=30735550

[19] https://www.dhirubhai.net/pulse/diminishing-returns-deploying-local-large-language-models-barton

[20] https://arxiv.org/abs/2205.11342

RESPONSE 2:

[1] https://echocraftai.com/nuclear-fusion-and-ai/

[2] https://www.dhirubhai.net/pulse/power-ai-advancing-nuclear-fusion-revolutionizing-path-clean-energy

[3] https://www.iaea.org/newscenter/news/seven-ways-ai-will-change-nuclear-science-and-technology

[4] https://www.reddit.com/r/fusion/comments/1157yfb/will_ai_allow_us_to_speedrun_nuclear_fusion/

[5] https://www.frontiersin.org/articles/10.3389/frsps.2024.1392128/full

[6] https://www.cnn.com/2024/03/26/climate/ai-energy-nuclear-fusion-climate-intl/index.html

[7] https://cset.georgetown.edu/publication/ai-chips-what-they-are-and-why-they-matter/

[8] https://azariangrowthagency.com/understanding-fine-tuning-its-importance-techniques-and-key-use-cases/

[9] https://fedscoop.com/how-ai-and-the-cloud-are-accelerating-scientific-discoveries-will-government-be-ready/

[10] https://www.oecd-ilibrary.org/science-and-technology/artificial-intelligence-in-science_a8d820bd-en

[11] https://engineering.princeton.edu/news/2024/02/21/engineers-use-ai-wrangle-fusion-power-grid

[12] https://erc.europa.eu/sites/default/files/2023-12/AI_in_science.pdf

[13] https://www.dhirubhai.net/pulse/fine-tuning-generative-ai-aniruddh-kendurkar-txgwf

[14] https://www.datacenters.com/news/enhancing-chip-manufacturing-with-artificial-intelligence

[15] https://letsdatascience.com/the-art-of-fine-tuning-ai-models/

[16] https://hellofuture.orange.com/en/iot-new-energy-efficient-chips-could-expand-the-scope-of-artificial-intelligence-in-edge-computing/

[17] https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/

[18] https://datascientest.com/en/fine-tuning-what-is-it-what-is-it-used-for-in-ai

[19] https://www.oecd.org/publications/artificial-intelligence-in-science-a8d820bd-en.htm

[20] https://www.nationalacademies.org/news/2023/11/how-ai-is-shaping-scientific-discovery

RESPONSE 3:

[1] https://www.elibrary.imf.org/view/journals/001/2021/166/article-A001-en.xml

[2] https://www.britannica.com/technology/history-of-technology

[3] https://business.columbia.edu/sites/default/files-efs/imce-uploads/Joseph_Stiglitz/Technological%20Progress,%20Artificial%20Intelligence%20and%20Inclusive%20Growth.pdf

[4] https://cpram.com/lux/en/individual/publications/megatrends/artificial-intelligence-how-to-think-about-the-ai-impact-on-the-economy

[5] https://www.sciencedirect.com/science/article/abs/pii/S0040162520309689

[6] https://en.wikipedia.org/wiki/Accelerating_change

[7] https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/

[8] https://en.wikipedia.org/wiki/History_of_technology

[9] https://unctad.org/publication/impact-rapid-technological-change-sustainable-development

[10] https://study.com/learn/lesson/technology-history-timeline.html

[11] https://builtin.com/artificial-intelligence/artificial-intelligence-future

[12] https://ourworldindata.org/technological-change

[13] https://www.oecd-ilibrary.org/science-and-technology/the-impact-of-artificial-intelligence-on-productivity-distribution-and-growth_8d900037-en

[14] https://www.newthingsunderthesun.com/pub/oobot2tp/release/12

[15] https://ourworldindata.org/technology-long-run

[16] https://softwaremill.com/technology-trends/

[17] https://www.corbinball.com/article/29-futurism/263-60yeareventtechtimeline

[18] https://www.discovertec.com/blog/evolution-of-technology

[19] https://www.ey.com/en_us/insights/ai/past-tech-disruptions-inform-economic-impact-of-ai

