Generative AI: Anticipating the Future of Intelligent Innovation
Sourav Rout, MBA, MS
Digital Transformation Leader | Driving Sustained Business Transformation through Digital Innovation, Process Excellence, Artificial Intelligence and Strategic Innovation
Quickly growing from a niche project in a few tech companies to a global phenomenon for business and professional users alike, generative AI is one of the hottest technology initiatives of the moment – and won’t be giving up its spotlight anytime soon.
Furthermore, generative AI is evolving at a stunningly rapid pace, enabling it to address a widening range of business use cases with increasing power and accuracy. Clearly, generative AI is reshaping how organizations view and do their work.
With both established tech enterprises and smaller AI startups vying for the next generative AI breakthrough, future prospects for generative AI are changing as rapidly as the technology itself. To better understand that future, this guide provides a snapshot of generative AI’s past and present, along with a deep dive into what the years ahead likely hold.
Generative AI’s Future: 8 Predictions
Looking ahead, expect to see generative AI trends concentrated in three main areas: quick and sweeping technological advances, faster-than-expected digital transformations, and increasing emphasis on the societal and global impact of artificial intelligence. These specific predictions and growing trends are most likely on the horizon:
1. Growth in Multimodality
Multimodality — the idea that a generative AI tool is designed to accept inputs and generate outputs in multiple formats — is starting to become a top priority for consumers, and AI vendors are taking notice.
OpenAI was one of the first to provide multimodal model access to users through GPT-4, and Google’s Gemini and Anthropic’s Claude 3 are some of the major models that have followed suit. So far, though, most AI companies have not made multimodal models publicly available, and even many that do offer them impose significant limitations on possible inputs and outputs.
In the near future, multimodal generative AI is likely to become less of a unique selling point and more of a baseline consumer expectation, at least for paid LLM subscriptions.
Additionally, expect multimodal modeling itself to grow in complexity and accuracy to meet consumer demands for an all-in-one tool. This may look like improving the quality of image and non-text outputs or adding better capabilities and features for things like videos, file attachments (as Claude has already done), and internet search widgets (as Gemini has already done).
ChatGPT currently enables users to work with text (including code), voice, and image inputs and outputs, but there are no video input or output capabilities built into ChatGPT. This may change soon, as OpenAI is experimenting with Sora, its new text-to-video generation tool, and will likely embed some of its capabilities into ChatGPT as they have done with DALL-E.
Similarly, while Google’s Gemini currently supports text, code, image, and voice inputs and outputs, there are major limitations on image possibilities, as the tool is currently unable to generate images with people. Google seems to be actively working on this limitation behind the scenes, leading me to believe that it will go away soon.
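To make the multimodality idea concrete, a multimodal request typically packages several content types into a single prompt. The sketch below builds a user message in the style of OpenAI's Chat Completions content-parts format, combining text with an image reference; the question and URL are placeholders, and a real request would send this payload through a vendor SDK with an API key.

```python
# Build a multimodal chat message: one user turn that combines a text
# question with an image reference, using OpenAI-style content parts.

def build_multimodal_message(question: str, image_url: str) -> dict:
    """Package text and an image into a single user message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

message = build_multimodal_message(
    "What product is shown in this photo?",
    "https://example.com/product.jpg",  # placeholder image URL
)

# The single message carries two content parts: one text, one image.
part_types = [part["type"] for part in message["content"]]
print(part_types)  # ['text', 'image_url']
```

The key design point is that text, images, audio, and eventually video all ride in one structured message rather than separate requests, which is what lets a model reason over them jointly.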
2. Wider Adoption of AI as a Service
AI as a service is already growing in popularity across artificial intelligence and machine learning business use cases, but it is only just beginning to take off for generative AI.
However, as the adoption rate of generative AI technology continues to increase, many more businesses are going to start feeling the pain of falling behind their competitors. When this happens, the companies that are unable or unwilling to invest in the infrastructure to build their own AI models and internal AI teams will likely turn to consultants and managed services firms that specialize in generative AI and have experience with their industry or project type.
Specifically, watch as AI modeling as a service (AIMaaS) grows its market share. More AI companies are going to work toward public offerings of customizable, lightweight, and/or open-source models to extend their reach to new audiences. Generative AI-as-a-service initiatives may also focus heavily on the support framework businesses need to do generative AI well. This will naturally lead to more companies specializing and other companies investing in AI governance and AI security management services, for example.
3. Movement Toward AGI and Related Research
Artificial general intelligence, the concept of AI reaching the point where it can match or outperform humans in most cognitive tasks and critical thinking assignments, is a major buzzword among AI companies today, but so far, it’s little more than that.
Google’s DeepMind is one of the leaders in defining and innovating in this area, along with OpenAI, Meta, Adept AI, and others. At this point, there’s not much agreement on what AGI is, what it will look like, or how AI leaders will know whether they’ve reached it.
So far, most of the research and work on AGI has happened in silos. In the future, AGI will continue to be an R&D priority, but much like other important tech and AI initiatives of the past, it will likely become more collaborative, if for no other reason than to develop a consistent definition and framework for the concept. While AI leaders may not achieve true AGI or anything close to it in the coming years, generative AI will continue to creep closer to this goal while AI companies work to more clearly define it.
4. Significant Workforce Disruption and Reformation
Most experts and tech leaders agree that generative AI is going to significantly change what the workforce and workplace look like, but they’re torn on whether this will be a net positive or net negative for the employees themselves.
In this early stage of workforce impact, generative AI is primarily supporting office workers with automation, AI-powered content and recommendations, analytics, and other resources to help them get through their more mundane and routine tasks. Though there is some skepticism both at the organizational and employee levels, new users continue to discover generative AI’s ability to help them with work like drafting and sending emails, preparing reports, and creating interesting content for social media, all of which saves them time for higher-level strategic work.
Even with these more simplistic use cases, generative AI has already shown its nascent potential to completely change the way we work across industries, sectors, departments, and roles. Early predictions expected generative AI would mostly handle assembly line, manufacturing, and other physical labor work, but to this point, generative AI has made its most immediate and far-reaching impacts on creative, clerical, and customer service tasks and roles.
Workers such as marketers, salespeople, designers, developers, customer service agents, office managers, and assistants are already feeling the effects of this technological innovation and fear that they will eventually lose their jobs to generative AI. Indeed, most experts agree that these jobs and others will not look the same as they do now in just a couple of years. But there are mixed opinions about what the “refactored” workforce will look like for these people — will their job simply change or will it be eliminated entirely?
With all of these unknowns and fears hanging in the air, workplaces and universities are currently working on offering coursework, generative AI certifications, and training programs for professional usage of AI and generative AI. Undergraduate and graduate programs of AI study are beginning to pop up, and in the coming months and years, this degree path may become as common as those in data science or computer science.
5. Increasing Regulatory, Ethical, and Societal Pressures
In March 2024, the EU AI Act that had been discussed and reviewed for several years was officially approved by the EU Parliament. Over the coming months and years, organizations that use AI in the EU or in connection with EU citizen data will be held to this new regulation and its stipulations.
This is the first major regulation to focus on generative AI and its impact on data privacy, but as consumer and societal concerns grow, don’t expect it to be the last. There are already state regulations in California, Virginia, and Colorado, and several industries have their own frameworks and rules for how generative AI can be used.
On a global scale, the United Nations has begun to discuss the importance of AI governance, international collaboration and cooperation, and responsible AI development and deployment through established global frameworks. While it’s unlikely that this will turn into an enforceable global regulation, it is a significant conversation that will likely frame different countries’ and regions’ approaches to ethical AI and regulation.
6. Bigger Emphasis on Security, Privacy, and Governance
With the regulations already in place and expected to come in the future, not to mention public demand, AI companies and the businesses that use this technology will soon invest more heavily in AI governance technologies, services, and policies, as well as security resources that directly address generative AI vulnerabilities.
A small number of companies are focused on improving their AI governance posture, but as AI usage and fears grow, this will become a greater necessity. Companies will begin to use dedicated AI governance and security platforms on a greater scale, human-in-the-loop AI model and content review will become the standard, and all companies that use generative AI in any capacity will operate with some kind of AI policy to protect against major liabilities and damage.
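The human-in-the-loop review pattern mentioned above can be sketched very simply: AI-generated drafts enter a holding queue and are only released once a human reviewer approves them. The class and method names below are illustrative, not taken from any specific governance platform.

```python
from dataclasses import dataclass, field

# Minimal human-in-the-loop gate: AI output is queued for review and
# nothing reaches publication without an explicit human approval.

@dataclass
class ReviewQueue:
    pending: dict = field(default_factory=dict)
    approved: list = field(default_factory=list)

    def submit(self, draft_id: str, text: str) -> None:
        """AI-generated draft enters the queue; it is not yet published."""
        self.pending[draft_id] = text

    def review(self, draft_id: str, approve: bool) -> None:
        """A human reviewer approves or rejects a pending draft."""
        text = self.pending.pop(draft_id)
        if approve:
            self.approved.append(text)

queue = ReviewQueue()
queue.submit("draft-1", "AI-written product announcement")
queue.submit("draft-2", "Off-brand AI reply")
queue.review("draft-1", approve=True)   # human signs off
queue.review("draft-2", approve=False)  # human blocks it
print(queue.approved)  # ['AI-written product announcement']
```

Real governance platforms add audit logs, reviewer roles, and policy checks on top of this gate, but the core liability protection is the same: no AI output ships without a recorded human decision.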
7. Greater Focus on Quality and Hallucination Management
As governments, regulatory bodies, businesses, and users uncover dangerous, stolen, inaccurate, or otherwise poor results in the content created through generative AI, they’ll continue to put pressure on AI companies to improve their data sourcing and training processes, output quality, and hallucination management strategies.
While an emphasis on quality outcomes is part of many AI companies’ current strategies, this approach and transparency with the public will only expand to help AI leaders maintain reputations and market share.
So what will generative AI quality management look like? Some of today’s leaders are providing hints for the future.
For example, with each generation of its models, OpenAI has improved accuracy and reduced the frequency of AI hallucinations. In addition to actually doing this work, the company has also provided detailed documentation and research data to show how its models are working and improving over time.
On a different note, Google’s Gemini already has a fairly comprehensive feedback management system for users, where they can easily give a thumbs-up or thumbs-down with additional feedback sent to Google. They can also modify responses, report legal issues, and double-check generated content against internet sources with a simple click.
These features provide users with the assurance that their feedback matters, which is a win on all sides: Users feel good about the product and Google gets regular user-generated feedback about how their tool is performing.
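A thumbs-up/thumbs-down pipeline like the one described above reduces to collecting per-response votes and optional comments, then aggregating them into a quality signal. The sketch below is a loose illustration of that pattern, not Google's actual system.

```python
from collections import Counter

# Illustrative feedback tracker: records thumbs-up / thumbs-down votes
# per generated response and computes an overall approval rate.

class FeedbackTracker:
    def __init__(self):
        self.votes = Counter()
        self.comments = []  # (response_id, free-text comment) pairs

    def record(self, response_id: str, thumbs_up: bool, comment: str = "") -> None:
        self.votes["up" if thumbs_up else "down"] += 1
        if comment:
            self.comments.append((response_id, comment))

    def approval_rate(self) -> float:
        total = self.votes["up"] + self.votes["down"]
        return self.votes["up"] / total if total else 0.0

tracker = FeedbackTracker()
tracker.record("resp-1", thumbs_up=True)
tracker.record("resp-2", thumbs_up=False, comment="Cited a source that doesn't exist")
tracker.record("resp-3", thumbs_up=True)
print(round(tracker.approval_rate(), 2))  # 0.67
```

In practice the comments are the more valuable half: vote counts show whether quality is trending, while free-text reports like the hallucinated citation above tell vendors what to fix.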
In a matter of months, I expect to see more generative AI companies adopt this kind of approach for better community-driven quality assurance in generative AI.
8. Widespread Embedded AI for Better Customer Experiences
Many companies are already embedding generative AI into their enterprise and customer-facing tools to improve internal workflows and external user experiences. This is most commonly happening with established generative AI models, like GPT-3.5 and GPT-4, which are frequently getting embedded as-is or are being incorporated into users’ preexisting apps, websites, and chatbots.
Expect to see this embedded generative AI approach as an almost-universal part of online experience management in the coming years. Customers will come to expect that generative AI is a core part of their search experiences and will deprioritize the tools that cannot provide tailored answers and recommendations as they research, shop, and plan experiences for themselves.
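At the code level, embedding a generative model into an existing product flow usually means wrapping the model call so the original feature still works when generation fails. The sketch below shows that pattern for a product search: `FakeModelClient` is a stand-in for a real vendor SDK call, and all names are hypothetical.

```python
# Sketch of embedding generative AI into an existing search flow:
# classic keyword search always runs, and a generated summary is
# layered on top, degrading gracefully if the model call fails.

class FakeModelClient:
    """Stand-in for a real generative AI SDK client."""
    def generate(self, prompt: str) -> str:
        return f"AI summary for: {prompt}"

def keyword_search(query: str, catalog: list[str]) -> list[str]:
    """The preexisting, non-AI search the app already ships with."""
    return [item for item in catalog if query.lower() in item.lower()]

def assisted_search(query: str, catalog: list[str], client) -> dict:
    """Combine classic results with a generated recommendation."""
    results = keyword_search(query, catalog)
    try:
        summary = client.generate(query)
    except Exception:
        summary = None  # model outage: search still works without AI
    return {"results": results, "summary": summary}

catalog = ["Trail running shoes", "Road running shoes", "Hiking boots"]
out = assisted_search("running", catalog, FakeModelClient())
print(out["results"])  # ['Trail running shoes', 'Road running shoes']
```

The fallback branch is the part worth copying: customers may come to expect AI-tailored answers, but the underlying experience should never break when the embedded model is unavailable.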
Generative AI’s Recent Past Suggests Its Future
With how much has happened in the world of generative AI, it’s hard to believe that most people weren’t talking about this technology until OpenAI first launched ChatGPT in November 2022. Many of generative AI’s greatest milestones were reached in 2023, as OpenAI and other hopeful AI startups — not to mention leading cloud companies and other technology companies — raced to develop the highest-quality models and the most compelling use cases for the technology.
Below, we’ve quickly summarized some of generative AI’s biggest developments in 2023, looking both at significant technological advancements and societal impacts:
Generative AI: The Current Landscape
The generative AI landscape has transformed significantly over the past several months, and it’s poised to continue at this rapid pace. What is covered below is a snapshot of what’s happening with generative AI in early 2024; expect many of these details to shift or change soon, as that has been the nature of the generative AI landscape so far.
Though it has not been widely adopted in many industries, generative AI continues to build its reputation and gain important footholds with both professional and recreational user bases. These are some of the main ways generative AI is being used today:
Consumer Trust and Ethical Considerations
According to Forrester’s December 2023 Consumer Pulse Survey results, “only 29% agreed that they would trust information from gen AI” and “45% of online adults agreed that gen AI poses a serious threat to society.” In the same results, though, 50% believed that this technology could also help them to find the information they need more effectively.
Clearly, public sentiment on generative AI is currently very mixed. In North America, in particular, there’s excitement and interest in the technology, with more users experimenting with generative AI tools than in most other parts of the globe. However, even among those with enthusiasm for generative AI, there is a general caution about data security, ethics, and the general trust gap that comes with a lack of transparency, misuse and abuse possibilities like deepfakes, and fears about future job security.
To earn consumer trust, more ethical AI measures must be taken at the regulatory and company levels. The EU AI Act, which recently passed into law, is a great step in this direction, as it specifies banned apps and use cases, obligations for high-risk systems, transparency obligations, and more to ensure private data is protected. However, it is also the responsibility of AI companies and businesses that use AI to be transparent, ethical, and responsible beyond what this regulation requires.
Taking steps toward more ethical AI will not only bolster these companies’ reputations and customer bases but also put safeguards in place to prevent harmful AI from taking over in the future.