Twenty AI Predictions

It’s time to make quite possibly lame predictions about generative AI (GenAI) and AI. I am committing these here so that years from now someone can buy me a nice dinner when I am proved right. Or vice versa. Or have me committed. These predictions might be a bit technical, but still approachable, I hope, to the lay person. More importantly, these aren’t entirely my predictions, but a collection of trend lines picked up from various sources. For the AI experts this is an easy read and probably easy to poke holes in, so let’s get going and yes, please poke holes.

1. GenAI will become increasingly modular

We are already seeing this. GPT-4 is reportedly constructed of eight models, with modules specialized for certain tasks in what is called a mixture-of-experts design. We can all access hundreds of thousands of AI and GenAI models on Hugging Face, and in our own AI platform at UC San Diego we are using several different models depending on the task. Human tasks are complex, and coordinating multiple specialized AI modules is very likely to be superior to a monolithic AI/GenAI architecture. The biological brain works this way (it is both hierarchical and modular), and the brain has been the model upon which artificial neural networks are based. I think it is a safe bet that a greater number of coordinated modules is likely to be better than one big module to rule them all. It is also likely that some form of competition (you could think of vector similarity searching as a simple form of that) will continue to be used to coordinate multiple models.
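
To make the coordination idea a bit more concrete, here is a minimal sketch of routing a request to one of several specialist models by vector similarity. The specialist names are invented and the embed() function is a stand-in (a hash-seeded random projection), not any vendor's API; a real system would use a proper embedding model.

```python
# Minimal sketch: route a request to a specialist model by embedding similarity.
# The specialist descriptions and the embed() stand-in are illustrative only.
import numpy as np

SPECIALISTS = {
    "sql_expert":    "Writes and explains SQL queries over relational data",
    "code_expert":   "Generates and reviews application source code",
    "policy_expert": "Answers questions about HR and compliance policy",
}

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model; returns a unit-length vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def route(query: str) -> str:
    """Pick the specialist whose description is most similar to the query."""
    q = embed(query)
    scores = {name: float(q @ embed(desc)) for name, desc in SPECIALISTS.items()}
    return max(scores, key=scores.get)

print(route("Show me last quarter's revenue by department"))
```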

2. Models will get smaller

Bigger doesn’t always mean better. For vertical AI applications that are deployed inside a company for competitive advantage, LLMs need to be more factually correct than they are expressive. To contain costs, organizations implementing GenAI will likely choose smaller models and leverage larger embeddings and prompt tuning (P-tuning) to achieve the quality levels needed.
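
For readers curious what prompt tuning looks like in practice, here is a rough sketch of the core idea: the base model's weights stay frozen and only a small matrix of "soft prompt" vectors is trained. The dimensions and the class itself are illustrative assumptions, not any particular library's implementation.

```python
# Sketch of prompt tuning (P-tuning): freeze the base model, learn only a small
# matrix of "soft prompt" embeddings that is prepended to every input.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_virtual_tokens: int = 20, embed_dim: int = 768):
        super().__init__()
        # The only trainable parameters: a few thousand numbers instead of billions.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeddings: torch.Tensor) -> torch.Tensor:
        # input_embeddings: (batch, seq_len, embed_dim) from the frozen model's embedding layer
        batch = input_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeddings], dim=1)

# Usage idea: pass the concatenated embeddings into the frozen transformer and
# backpropagate the task loss only into SoftPrompt.prompt.
soft = SoftPrompt()
dummy = torch.zeros(2, 10, 768)          # stand-in for embedded user text
print(soft(dummy).shape)                 # torch.Size([2, 30, 768])
```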

3. Models will be distributed and will need to collaborate with each other

Not all models will be super large and live in the cloud or in larger data centers. Just about every device manufacturer will incorporate chips specialized for distributed GenAI applications, where artificial neural networks (ANNs) on the device cooperate with a much larger model in a centralized environment. With every software vendor embedding AI in its solutions, organizations using these products will be figuring out how to integrate data across them, where it makes sense, to enhance the AI’s performance.

In the short run, embeddings, prompt tuning and responses shared between models will be common. In the long run, we will see a plethora of AI and GenAI models in use in most enterprises. And eventually the ecosystem is likely to find ways to build a very large ensemble of more loosely coupled cooperative models. I personally think the road to artificial general intelligence is paved with modular, distributed and coordinated models. I also think standards for AI model interoperability may be established at some point. The one word of caution is that large AI companies will want to have one AI to rule them all. Hopefully, the fragmented and competitive ecosystem and public policy we have today can keep the rise of a monopoly in check.

4. Chips will get faster and consume less energy

The current crop of GPUs is just too expensive and energy intensive to run GenAI at very large scales, and certainly for most enterprises. The race is on to see whose less-expensive and faster GenAI chip will reign supreme. Many have noted the need for more efficient chips, and different chip designs from AWS, Google, IBM and others have found their way into the market. In a few more years, I suspect we will see supremely energy-efficient neuromorphic chips start entering the market. The gotcha here is that establishing a chip fabrication facility takes many years and many billions of dollars. These manufacturing facilities will be scarce.

5. Prices will drop

Currently, the cost of GenAI is exorbitant, with what is essentially rationing occurring due to super-high demand. Among my peer CIOs, many of us have gasped at the high cost of inference and training. The current enterprise and personal pricing is unsustainable for long-term adoption. Over time, I see GenAI quickly becoming a commodity service. For the world to embed GenAI in everyday life and work environments, prices will have to come down. The race will be on to see who can commoditize this the best and the quickest. As prices drop, we will see personal and corporate purchases increase, replacing some and augmenting other IT purchases. If prices don’t drop, GenAI will fizzle away, like many other IT trends, into reasonable but more modest adoption.

6. More walled gardens of data will force revenue to be shared

Right now, GenAI needs more authentic, human-created content. AI that trains on content created by AI goes mad (Model Autophagy Disorder, model collapse). Current models are frozen at a point in time and need periodic retraining. New content is then added into the model during the retraining. Content creators and providers know this and will put up more walled gardens to prevent GenAI from stealing their content and remixing it. Because of this, the revenue from GenAI will need to be shared with content owners. Current copyright and IP law will likely need revision to address the power of GenAI to easily create derived works. The fight between the AI companies and the content creators and distributors will continue.

7. GenAI providers will personalize services

Personalization in chat AI today comes from saving what users type in the chat window or from users supplying additional personal content or preferences to guide the AI. This personal content can be easily matched with the larger mass of content made available by the GenAI. This is the essence of how GenAI works. In time, all of this can be preserved within GenAI’s long-term memory. I suspect that as models get smaller, GenAI service providers may have personal models for each user’s device or workspace. Either way, there is no reason the emerging GenAI ecosystem won’t allocate each user a portion of its capabilities to learn that user’s preferences.
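
A hypothetical sketch of what such per-user long-term memory could look like: preference snippets are embedded and the most relevant ones are recalled for each new request. The embed() stand-in and the class names are invented for illustration, not any provider's actual design.

```python
# Sketch: a per-user "long-term memory" of preferences, retrieved by similarity
# and prepended to each prompt. embed() is an illustrative stand-in.
from dataclasses import dataclass, field
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

@dataclass
class PersonalMemory:
    notes: list = field(default_factory=list)    # (text, vector) pairs

    def remember(self, text: str) -> None:
        self.notes.append((text, embed(text)))

    def recall(self, query: str, k: int = 3) -> list:
        q = embed(query)
        ranked = sorted(self.notes, key=lambda n: float(q @ n[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = PersonalMemory()
memory.remember("Prefers concise answers with citations")
memory.remember("Works in higher-education IT")
print(memory.recall("Summarize this budget memo"))
```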

Who will own this personal data and the personal model in such a 1:1 personalization approach? How will advertisers, governments, and law enforcement gain access to the personalization preferences? I think individuals will need to own the content they provide, and perhaps any smaller personal models produced, and be free to keep them quite private. Contracts are already moving in this direction to encourage adoption, and they will need to stay firmly on that course in the future. Laws governing privacy will be strengthened to clarify the ambiguities that AI and GenAI technology creates.

8. Transactional systems will have vector twins

There exist vast troves of authentic content within corporate transactional and knowledge-based systems. For cybersecurity, intellectual property and privacy reasons, most of this data is under tight control. I suspect enterprise software vendors are likely to make real-time vector twins for this largely relational or highly structured data. By converting these data into vectors (a type of data structure), the data can be readily searched and used by GenAI models. After all, the data in a financial system contains many more words describing the numerical data than the numbers themselves.
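
As a toy illustration of the vector-twin idea, the sketch below verbalizes a couple of invented ledger rows, embeds the sentences, and searches them by similarity. The table, the column names and the embed() stand-in are all assumptions made for the example.

```python
# Sketch of a "vector twin": verbalize structured rows, embed them, and index
# them for similarity search. The ledger rows and embed() are illustrative.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

ledger_rows = [
    {"dept": "Biology", "account": "Travel", "fy": 2024, "amount": 48210.55},
    {"dept": "Physics", "account": "Lab supplies", "fy": 2024, "amount": 131877.02},
]

def verbalize(row: dict) -> str:
    # The words around the numbers are what make the row searchable by an LLM.
    return (f"In fiscal year {row['fy']}, the {row['dept']} department spent "
            f"${row['amount']:,.2f} on {row['account'].lower()}.")

index = [(verbalize(r), embed(verbalize(r)), r) for r in ledger_rows]

def search(question: str, k: int = 1):
    q = embed(question)
    return sorted(index, key=lambda item: float(q @ item[1]), reverse=True)[:k]

for text, _, row in search("How much did Biology spend on travel?"):
    print(text, row)
```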

This high-quality, labeled data will be fed into GenAI models to provide high-quality dialogues with corporate employees. Most organizations are focused on general content found in web sites and lightly structured documents. A real-time, very large and very high-speed vector database suited for corporate transactional systems will help improve the quality of GenAI inside the enterprise. The current crop of GenAI models needs augmented “fact retrieval” to eliminate hallucinations. Wolfram Alpha’s model is designed just for this purpose and today has a plug-in available for ChatGPT.

9. Most user interfaces will become conversational

GenAI prompt engineering is quickly becoming a dominant form of general application development. Could someone clever create a spreadsheet tool that works via conversation instead of typing and mouse clicks? I strongly suspect so. It might be that in the not-so-distant future, Microsoft Excel will look as weird as an old mechanical calculator. Why type a formula when you can tell the GenAI to do the same? Why type in data from another source when you can ask the GenAI to fetch it? Why write code when you can ask the AI to write it for you? UI experts will now be busy analyzing user tasks, transforming them, and then matching parts of each task to a conversational approach.
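
A hypothetical sketch of one such conversational step: a natural-language request plus the sheet's column schema goes to a model, which returns a formula. The llm_complete() function is a stub standing in for whatever chat-completion API a real tool would call.

```python
# Sketch: a conversational "spreadsheet" step. llm_complete() is a placeholder
# for a real model call; it is stubbed here so the sketch runs on its own.
def llm_complete(prompt: str) -> str:
    # Stub standing in for a real chat-completion call.
    return "=AVERAGE(B2:B13)"

def formula_from_request(request: str, schema: str) -> str:
    prompt = (f"Spreadsheet columns: {schema}\n"
              f"User request: {request}\n"
              f"Reply with a single spreadsheet formula only.")
    return llm_complete(prompt)

print(formula_from_request("What was the average monthly revenue?",
                           "A: month, B: revenue"))
```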

10. BI, business workflow, event and alerting will change significantly

If you extend the prior two predictions just a bit, you can see that the current suite of business intelligence tools will need to evolve. Products like Tableau, Cognos, Power BI and others will certainly use GenAI to help users explore data in a conversational way. Over time, this can be extended to do away with the workbook itself. Just ask the GenAI to create a visual without needing to save a workbook to hold it. Perhaps more simply, GenAI can be used as a pattern detection tool to create alerts and notify humans or other systems to act. Just as current robotic process automation (RPA) tools serve as a kind of odd integration tool (which they can do with automated data entry routines), so too can GenAI serve as a fast and easy, although approximate, pattern-matching tool for system-to-system communication.
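
Here is a small, hedged sketch of that pattern-matching role: events flow through a classifier (stubbed with a keyword rule in place of a model call) and anomalies are emitted as alerts another system could consume. The event shapes and labels are invented for the example.

```python
# Sketch: using a model as an approximate pattern matcher over an event stream,
# emitting alerts for downstream systems. classify() is a stubbed placeholder.
import json

def classify(event_text: str) -> str:
    # Placeholder for an LLM call that labels the event; crude keyword rule here.
    return "anomaly" if "failed" in event_text.lower() else "normal"

def process_events(events):
    for e in events:
        if classify(e["message"]) == "anomaly":
            # In practice this alert could be posted to a queue or webhook.
            yield json.dumps({"alert": "review_required", "source": e["system"]})

events = [
    {"system": "payroll", "message": "Nightly batch completed"},
    {"system": "payroll", "message": "3 payment transfers failed validation"},
]
for alert in process_events(events):
    print(alert)
```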

11. AI will require a new type of lawyer or detective

Just as we need prompt engineering to coax correct answers out of an LLM, some lawyers will need to become prompt engineers to determine what a GenAI did. This could give birth to a new occupation: the AI forensic psychologist — someone who must interrogate the AI to learn why it did what it did. Is this a Blade Runner Voight-Kampff moment?

12. AI startups will evaporate

It is hard to see how so many AI startups can survive. Each of these firms has little room to define what is unique and defensible amid the aggressive competition of the marketplace. I suspect the best path forward for winning startups is to develop robust solutions within industry verticals, taking advantage of process and data knowledge unique to an industry vertical. In this regard, the AI startups will probably track similarly to the enterprise and ERP software markets and, in time, get acquired by those firms.

The IT industry has seen two trends continue: lower barriers to entry for startups and greater consolidation among large players, which causes startups to either get acquired or go out of business. So far, the GenAI market appears to have both lower barriers to entry for most startups and a bigger chance of getting bought out by a larger player. The result is a sort of big-fish-eats-little-fish fight for dominance as startups get absorbed or disappear. What follows is a sort of steady state in which a few large players continue to buy whatever startups they desire, leaving the rest to wither away. Expect that to happen here.

13. Very good incremental training will be the next game changer

Once large language models and other AI models master incremental training, everything changes. Everything. Incremental training would let providers continuously train their models on new data as it arises without worrying too much about catastrophic forgetting (where the AI loses an unsettlingly large percentage of its prior skills). This would provide stable performance, and it would fantastically lower the cost of keeping AI models up to date. While developing small models as part of prompt tuning or fine-tuning can help, I think the holy trinity of tremendous AI growth is: 1) neuromorphic chips or chip designs that consume orders of magnitude less energy than current chips; 2) AI modularity; and 3) incremental training.

When this happens, buckle up. Or be afraid. Or both. With regard to incremental learning and minimization of catastrophic forgetting, some things have been percolating: low-rank adaptation (which is also an example of modularity), prompt tuning, artificial dendritic neurons, and simulation of biological sleep, among others. For some time now scientists have been very aware of the need for incremental training, but they have also been grappling with the larger concept of lifelong learning for ANNs. In time, these problems will find workable solutions. Incremental learning, if combined with collaborative modularity between companies, could reshape the IT industry. Imagine LexisNexis, with its industry solution, partnering with OpenAI for general linguistic skill, but also with companies and law firms who may create their own models. AI model interoperability?
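
Since low-rank adaptation is mentioned above, here is a minimal sketch of the idea: the pretrained weight matrix is frozen and only a small low-rank update is trained, so new skills can be layered on without rewriting the base model. The dimensions and hyperparameters are illustrative assumptions, not a reference implementation.

```python
# Sketch of low-rank adaptation (LoRA): the frozen weight W is augmented by a
# small trainable update B @ A, so new skills can be added without retraining W.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)   # frozen pretrained weights
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)   # trainable
        self.B = nn.Parameter(torch.zeros(out_dim, rank))         # trainable
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
print(layer(torch.zeros(1, 768)).shape)   # torch.Size([1, 768])
```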

14. Prompt engineering is dead. Long live pipeline engineering. Or orchestration. Or whatever the term is.

OpenAI CEO Sam Altman has commented that prompt engineering is a temporary phenomenon. It is very likely that before long we will look back at this day of prompt engineering as the awkward age of AI. Its teenage years. It is kind of crazy to think you must verbally irritate and coax the GenAI into the results you want. Well, for those of us who have raised kids through their teenage years, maybe not. But even if prompt engineering as we know it goes away, if you consider modularity and distributed, collaborative AI as part of our future, the need to orchestrate pipelines and move data between models grows in importance. Recapitulating brain biology, pipeline engineering will become the new white matter connecting all the AI neuronal grey matter out there. In addition, part of prompt and pipeline engineering involves reconceptualizing and solving the problem at hand. Given the burgeoning complexity here, I suspect the need to integrate AI modules will be around for a while.
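
A toy sketch of what such pipeline orchestration might look like: a request flows through retrieve, draft and check stages, each of which could be a different model or service. The stage functions are stubs I have invented; only the flow itself is the point.

```python
# Sketch of pipeline orchestration: each stage could be a different model or
# service; here the stages are stubbed so the flow itself is what's on display.
from typing import Callable, List

def retrieve(query: str) -> str:
    return f"{query} | context: FY2024 travel policy excerpt"

def draft(augmented_query: str) -> str:
    return f"DRAFT answer based on -> {augmented_query}"

def check(draft_text: str) -> str:
    # A second model (or a rules engine) could verify facts before release.
    return draft_text + " [checked]"

def run_pipeline(query: str, stages: List[Callable[[str], str]]) -> str:
    result = query
    for stage in stages:
        result = stage(result)
    return result

print(run_pipeline("Can I expense a rental car?", [retrieve, draft, check]))
```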

15. Deep fakes will spike then cool down

Maybe I am too optimistic or naïve here, but I think of human trust as a form of capital stock. People place trust in things they find trustworthy. If people find certain things frequently untrustworthy, they will place their trust elsewhere. We don’t lose trust, we just stop spending it on some things. People will work together and get savvy about better detecting untrustworthy things. It’s part of what we do as human beings. But it will take some time for people to learn, and until then, deep fakes will wreak havoc and create too much personal harm.

16. AI will grow some economies, disrupt jobs, and make politics worse

As many are saying, your job won’t be taken by an AI, but by another person using AI. The economy’s constant creative destruction accelerates somewhat with AI and GenAI. New jobs have already been created and more are coming. Some jobs are going to diminish and fade away. This will create personal difficulty for people negatively affected. Automated economies will be impacted more than economies based on manual labor. Productivity is likely to increase, creating economic growth, but not equally for all economies. Despite the productivity boosts to some, I doubt this will ease the great decoupling between productivity and labor. The skills and wealth gap between the technology haves and have-nots will widen further, making politics worse. Countries will need public policy to ameliorate these shocks. This trend concerns me most.

17. Conversations about superintelligence will be frequent and lively

For decades now I have thought superintelligence impossible. Now I am not so sure. It is mathematically possible within our kids’ lifetimes. A few odd and maybe ill-informed questions stick out. For one, how will we know if an AI is superintelligent? What test would we give? How would we know it isn’t deceiving us? Is the difference between us and a superintelligent AI akin to the difference between a dog and a human? Or will our levels of intelligence be further apart? And if that is the case, then all speculations by us mere mortals are about equally good, no? I mean, some dogs are known for their higher intelligence over others, but I suspect that compared to a superintelligent AI, all humans are about equally dim. See here and here for examples of an interesting discussion.

With an obtuse nod to Hitchhiker’s Guide to the Galaxy, I think a superintelligent AI would consume all known data and fall asleep, finding humans quite boring. Others think there would be a chance for great harm to come to humans. While I do not know what a superintelligent AI would do, I certainly know what one group of humans would do upon learning that another group of humans was about to give birth to a superintelligent AI that could harm them. This is utterly predictable. And frightening to think about. In the meantime, I will add a related prediction: in the coming decade, GenAI will start to exhibit traits of artificial general intelligence, with the AI performing many tasks at or exceeding human skill levels.

18. AI will not be applied everywhere and will take more time

In the world of dystopian future movies and literature, I find three economies impossible: a) an economy of abundance where energy is free (Trekonomics; see here for a good read); b) a zombie economy where all creatures do not require much and do not need employment; c) an economy of all robots and no human work. As far as we know, economies manifest human sociology. These are all interesting thought experiments, but for the foreseeable GenAI future, employers will continually compare human costs and AI costs. If AI remains too expensive or has insufficient benefit, it will be curtailed or discarded. As soon as AI becomes quite inexpensive, it will be adopted quickly and cause some job loss, which in turn may dampen the economy through reduced consumer demand. That, in turn, can slow rates of investment and innovation. So, perhaps we can rule out excessive automation.

It will also take decades for AI to find its way into many business and consumer processes. People still resist change, and old habits take a while to undo. The level of tailoring of AI required to address all the niches in the ecosystem is greater than many suspect and will take time. People, firms and markets change, but the rate of change is controlled by human adoption, which also takes time. Furthermore, when it comes to robots replacing all manual work, the pesky problems of energy costs persist. Humans will remain remarkably cheap compared to machines for quite some time. Economies with larger shares of manual labor are not expected to see the productivity gains AI may bring to more information-based economies. As companies evaluate the cost of AI versus the cost of human labor, they will often find that AI cannot reasonably be deployed to reduce labor. The future will arrive at different times in different places.

19. AI will be used to create law

We can be sure that it is happening now. A LexisNexis survey shows that about half of their respondents believe GenAI will have a significant impact on their profession, especially for summarization. Most legislative staffers are young and are likely to use GenAI to help them craft what becomes law. I do not have any idea what this will do to law, except to say that there is a slight chance that GenAI’s “concision” capabilities (ability to summarize) may help some legislators understand the policies the staffers and legal teams create. Is this frightening? Dunno, but GenAI’s ability to hallucinate might find its day in the Supreme Court as a point of contention around understanding legislative intent after discovering the hallucination in the law passed or in the legislative record. We’ve already had one lawyer submit errant citations in a case. Why would we not expect other errors in the creation of laws in the future?

20. AI must remain open with its innards fully revealed

This last item is not a prediction but a hopeful mandate, so bear with me. Here goes. AI companies and investors have poured hundreds of billions of dollars into AI, and these investors expect that their investments will produce a form of AI that cannot be easily reproduced elsewhere and hence will make them rich. While I am heartened by the research and entrepreneurial community’s ability to innovate quickly and keep well-funded AI companies on their toes, too much is at stake for these large firms. They must close off insight into how their models work. They must buy out would-be competitors. This will not be good for researchers or the public.

The evolution of artificial neural networks is continuing to follow patterns from biological neural networks in brains. Functional modules in biological brains like attention, long-term memory, working memory, and incremental learning are finding their homologues in ANNs. It is as if there now exists a “universal neuron” or a universal neural architecture that mimics biological neural architecture’s capabilities. We have stumbled upon this pattern of convergent evolution. As this continues, the innards of the artificial brain should be as inspectable as the biological brain. This knowledge must be a common good that benefits all. If the AI innards remain a black box, at the very least the most disadvantaged people will never see the benefits, and at the worst, a billionaire with his mind bent on global domination might use AI to ruin his enemies, powerful or weak. It has happened before, and it will happen again. Vigilance will be necessary.

I think enough cement has been poured around AI that the future is fairly certain. I mean, what more could possibly change? What say you? Curious minds want to know…


Simon Buckingham Shum

Professor of Learning Informatics / Director, Connected Intelligence Centre, UTS


Helpful thoughts thanks! On this: "It is very likely that before long we will look back at this day of prompt engineering as the awkward age of AI. Its teenage years. It is kind of crazy to think you must verbally irritate and coax the GenAI to the results you want." Yes indeed: my version of this: "We see the rise of this new art as people delight in figuring out ways to make ChatGPT do their bidding, and set themselves up as Prompt Engineering gurus. This is fun while we all play — but if you need to do serious work, the idea that you need to approach your AI assistant with guile and cunning — as though they’re a tetchy colleague you have to manipulate to get them to cooperate — seems odd to say the least. While learning to control the output of language models is certainly a form of AI literacy, the need for “prompt engineering” may be consigned in the history books to a curiosity associated with the earliest releases, as people sought to use the chatbot not just for conversation, but as a practical creative tool. A command line interface with highly unpredictable output is not the optimal user interface for co-creation." https://simon.buckinghamshum.net/2023/03/the-writing-synth-hypothesis

Observation about #9: If all interfaces to data are intelligent, control over those interfaces is control over data. What stops government, for example, from demanding back-end control over interface AIs when they touch on certain topics? Why wouldn't it want to simply make data it does not like disappear, or replace it with government-massaged data that supports a partly or completely false narrative the government wishes to propagate? This is compounded the more AIs are linked and modularized, since the government need only corrupt a few strategic AI nodes to have the effect it desires. Substitute any large or powerful entity with control over large data stores. All they need is a reason to alter or conceal data, and AI will do it very well.

Joel Dehlin

Chief Executive Officer at Kuali


Love it. Four more: 21. The focus on SEO will shift to a focus on optimizing content for AI. 22. Artists will bifurcate to those who use AI and those who avoid creating digital or digitizing content. 23. We will debate ethical treatment of AI. Maybe not this year or next year, but sooner than many think. 24. The next cults will be built by and around AI.

Vishal Singh

Health & Human Services and Healthcare Leader | Management Consulting Leader | DEI Champion


Interesting insights. Most do seem plausible, two already proving to be accurate.
