What to expect from Generative AI for business in 2024
As we reflect on 2023, it wouldn't be an overstatement to assert that technology, particularly in the domain of AI, has progressed by a decade within a single year. Generative AI has significantly influenced the industry, triggering substantial competition among major players striving to excel in the development of superior LLMs. This competitive atmosphere serves as a positive force, propelling advancements in the quality and potential of AI. According to Gartner, by 2026 over 80% of enterprises are projected to have integrated generative AI APIs or models. Additionally, the implementation of GenAI-enabled applications in production environments is expected to surge, marking a noteworthy increase from the modest 5% reported in 2023.
But beyond this initial boom, we enter a phase where, for this trend to solidify, the use of generative AI needs to evolve in terms of its practical (and secure) application. Despite the enthusiasm, the truth is that enterprises are slow to adopt commercial LLMs due to several uncertainties.
I have been working with AI for over 12 years and have witnessed some of the "cycles" that come with the adoption of AI in businesses. In 2023, I had the privilege of talking to many of our clients, listening to their expectations, and understanding their main concerns about LLMs. From my perspective, here are some (not all) crucial aspects for the consolidation of generative AI in the business scene in 2024:
Response accuracy
The potential of Large Language Models and Generative AI to comprehend, transform, and generate texts is well-known and has significantly impacted society and business. However, there is a major concern regarding the accuracy of responses, especially when answers depend on specific context and up-to-date information. The most obvious solution is the already popular prompt engineering, where you provide instructions to the model on how it should act.
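As a sketch of what prompt engineering looks like in practice, the snippet below composes a prompt that pins the model to a given context and a fallback answer. The template and function name are illustrative, not tied to any specific provider's API.

```python
# Minimal prompt-engineering sketch: explicit instructions plus context
# constrain how the model answers. The template is illustrative only.

def build_prompt(instructions: str, context: str, question: str) -> str:
    """Compose a single prompt string from instructions, context, and question."""
    return (
        f"Instructions: {instructions}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    instructions="Answer only from the context. If unsure, say 'I don't know'.",
    context="Our refund policy allows returns within 30 days of purchase.",
    question="How long do customers have to return a product?",
)
print(prompt)
```

The resulting string would be sent as the input to whatever LLM endpoint the company uses; the key idea is that the instructions travel with every request.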
Nevertheless, in the corporate domain, many companies also need to go through the process of what is termed fine-tuning. This process enhances learning by training on a more extensive set of examples than can fit in the initial prompt, enabling better results across a wide range of tasks. Once a model has been fine-tuned, the need for providing numerous examples in the prompt diminishes. This not only reduces costs but also enables quicker responses with lower latency, enhancing operational efficiency.
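To make the cost argument concrete, fine-tuning data is commonly prepared as JSONL, one training example per line; the snippet below builds such a dataset in memory. The field names "prompt" and "completion" are illustrative and vary by platform.

```python
# Sketch of preparing fine-tuning data as JSONL (one JSON object per
# line). Field names are illustrative; each provider defines its own.
import json

examples = [
    {"prompt": "Classify sentiment: 'Great service!'", "completion": "positive"},
    {"prompt": "Classify sentiment: 'Very slow delivery.'", "completion": "negative"},
]

# Serialize each example to its own line, the JSONL convention.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
lines = jsonl.splitlines()
print(len(lines))
```

After the model is trained on many such pairs, the prompt at inference time no longer needs to carry the examples themselves, which is exactly where the token-cost and latency savings come from.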
Another approach to address this accuracy concern is using what is known as RAG. Retrieval Augmented Generation, or RAG, is a technique that combines the power of pre-trained large language models with the ability to retrieve information from external sources. Essentially, RAG is a framework that bridges the gap between the pure use of generative AI models and those use cases where the company needs to leverage data from a predefined dataset. It combines both approaches, enhancing the capability to produce coherent and contextually appropriate responses (learn more in my article about RAG ).
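The RAG pattern described above can be sketched in a few lines: retrieve the most relevant passages from an external store, then augment the prompt with them before calling the generative model. Real systems use embeddings and vector search; plain word overlap stands in for the retriever here, and all names are illustrative.

```python
# Minimal RAG sketch: a toy retriever (word overlap instead of vector
# search) followed by prompt augmentation with the retrieved passages.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Augment the prompt with the top-k retrieved passages."""
    passages = retrieve(query, documents)
    context = "\n".join(f"- {p}" for p in passages)
    return f"Use only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The warranty covers hardware defects for two years.",
    "Support tickets are answered within one business day.",
    "Shipping is free for orders above 50 euros.",
]
rag_prompt = build_rag_prompt("How long does the warranty cover defects?", docs)
print(rag_prompt)
```

The design point is that the generative model never needs to have memorized the company's data; the retriever injects the relevant facts at query time, which is what makes responses contextually grounded.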
IBM watsonx.ai is specifically dedicated to offering companies an advanced enterprise platform that empowers them to train, validate, fine-tune, and deploy generative AI models. Beyond leveraging IBM's pre-trained foundation models, companies also have the flexibility to leverage open-source models and fine-tune them to align with their specific requirements. A true enterprise AI platform.
Security and Privacy
Undoubtedly, security and privacy are the primary concerns for companies. In fact, according to another report, 75% of surveyed companies have no intentions of using commercial LLMs in production, citing data privacy concerns as the main reason.
I heard from several companies exploring the adoption of Large Language Models in 2023 that they have reservations about sending their data to third-party entities. They are concerned about the potential misuse of generated content and are uncertain about whether their data is used for retraining or shared with other companies utilizing the API. Additionally, there is a lack of trust and understanding regarding how these data are protected, with fears of possible data breaches.
It's truly reassuring to note that IBM's approach to AI model development is guided by core principles emphasizing trust and transparency, crucial for ensuring responsible AI practices. The IBM watsonx platform enables efficient management of the complete AI model lifecycle: centralized control over all tools and runtimes streamlines the training, validation, tuning, and deployment of AI models across diverse cloud and on-premises environments, ensuring an integrated and efficient approach to AI initiatives.
Besides that, your work with foundation models on watsonx ensures a high level of security and privacy. The foundation models, hosted on IBM Cloud, guarantee that your data remains within the IBM ecosystem and is not transmitted to third-party or open-source platforms. When you create or send prompts through the Prompt Lab or API, they are exclusively accessible by you and are used solely for the models of your choice. IBM and other entities do not access or utilize your prompt text. You retain control over the saving of prompts, model choices, and prompt engineering parameters, with stored data encrypted both at rest and in motion within a dedicated IBM Cloud Object Storage bucket associated with your project. The flexibility to delete your stored data at any time further emphasizes the user's control over their information.
Scarcity of skilled professionals
There is much talk about the jobs that generative AI could eliminate, but little attention is given to the numerous jobs being created that remain unfilled due to a lack of qualified professionals. As organizations begin to define their objectives for Generative AI, the demand for individuals well-versed in gen AI concepts is on the rise.
While generative and other practical AI tools demonstrate their value to early adopters, there exists a significant gap between the available workforce possessing these skills and the growing demand. To address this challenge, organizations should concentrate on improving their talent management capabilities, cultivating positive work experiences for the adept gen AI-literate workers they hire, with the goal of not only attracting but also retaining this valuable talent in a competitive job market.
In addition, labor disruptions may signal an unprecedented need for reskilling displaced workers, necessitating a substantial boost in retraining capacity. Change is happening, and fast. Research from IBM's Institute for Business Value (IBV) finds that executives estimate about 40% of their workforce will need to reskill over the next three years due to AI and automation. It is undoubtedly essential to establish public-private partnerships to address this requirement.
Speaking of training, IBM offers a free education program called IBM SkillsBuild. This program enables learners worldwide to access AI education developed by IBM experts, offering the latest insights into cutting-edge technology developments.
As you can see, the potential is enormous, but challenges do exist and need to be addressed with seriousness. From my side, I can assure you that I have been tirelessly working alongside a team of IBM experts to adopt foundation models and generative AI, aiming to revolutionize the experience for our Business Analytics users.