How to integrate ChatGPT into your business (any business)
Why do millions of individuals and companies use ChatGPT every day? Because it can generate content that is indistinguishable from human-created output. It is also becoming a platform on which many applications can be developed, just as the iPhone is for app developers. These applications allow us to automate our work and augment our capabilities. ChatGPT can produce new content in seconds, saving time and resources. But it can also improve a business. In the consumer sector, for example, consumers are getting a virtual assistant that helps them shop, train, design a diet, plan travel and more.
In this article, we will look at how to create a responsible ChatGPT tool for your business. We will explore some strategies and discuss the main topics CEOs and top teams should consider when developing generative AI tools. In particular:
1) what does it take to build a tool and what are its uses;
2) how to develop it; and
3) how to make it responsible and compliant.
The power of generative AI
ChatGPT is part of what is called generative AI: a type of artificial intelligence that can create a wide variety of data, such as images, videos, audio, text, and 3D models. It does this by learning patterns from existing data sets, then using this knowledge to generate new and unique outputs. It can learn from any database, including your company's, which is why you can create customised tools for your own business.
One example can be drawn from the HR sector. On average, people will change jobs six to seven times in their career. Every time a company needs to onboard new people, the process is seldom just a two-hour workshop and the delivery of a laptop. Among other things, it involves training customised by function, geography and department. Generative AI can simplify many aspects of onboarding by automating them, and it can also augment the process by acting as an onboarding coach and buddy along the way. For example, with ChatGPT you can develop a tool that, in less than a minute, transforms a 1,000-page training manual into an interactive document with questions and answers. The same can be done with video seminars and podcasts. The entire onboarding process can be redesigned and customised within minutes. Have you considered how ChatGPT could help your HR department or train your workforce on a new topic?
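As a rough illustration of how such a tool might be wired up, here is a minimal Python sketch that splits a manual into chunks and asks the model to produce question/answer pairs for each one. The file name, chunk size and prompts are illustrative assumptions, and the exact client interface depends on the SDK version you use.

```python
# A minimal sketch: turn chunks of a training manual into Q&A pairs.
# Assumes the `openai` Python package (pre-1.0 interface) and an API key
# in the OPENAI_API_KEY environment variable; names are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split the manual into chunks small enough for one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def generate_qa(chunk: str) -> str:
    """Ask the model for question/answer pairs covering one chunk."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You create onboarding Q&A from company training material."},
            {"role": "user",
             "content": f"Write 5 question/answer pairs covering this text:\n\n{chunk}"},
        ],
        temperature=0.2,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    manual = open("training_manual.txt", encoding="utf-8").read()
    for chunk in chunk_text(manual):
        print(generate_qa(chunk))
```

In a real tool, HR would review the generated pairs before feeding them into whatever interactive format the company already uses.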
The opening of APIs
The development of these tools is the result of three milestones. In November 2022, OpenAI released ChatGPT, the chatbot capable of answering any user's question. A few months later, in March 2023, it released public APIs that allow ChatGPT to be used in any type of application. That same month, GPT-4 was released, enabling developers to process a greater amount of data and opening the door to more advanced applications based on this model.
But how exactly does this apply to your business? With the opening of public APIs, ChatGPT has allowed developers to harness its ability to process large amounts of data. A data scientist can now query ChatGPT automatically as many times as needed and use the answers it returns to generate more. What is more important, though, is that ChatGPT can pull information from a database you own and summarise the key concepts it contains, just as we saw with the HR example.
Imagine if, instead of just a 1,000-page training manual, we asked ChatGPT to access the entire company database, including management systems, CRMs, document repositories, customer care chats, email inboxes, and so on. We could, for example, design a tool that automatically prepares a company presentation or, even better, one that can fill in an application form. Recently, we were asked to help design a tool with which a company can evaluate and bid on a tender in under five minutes. Scrolling through a tender document of over 100 pages, the tool reads and summarises the application requirements in a few seconds. If the company matches the requirements, the tool then retrieves the company's information from a shared database and automatically fills in the application form. Imagine how long it would take an employee to read a 100-page document, summarise it, retrieve the information and fill in the form.
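A heavily simplified sketch of that tender workflow might look like the following: one call summarises the application requirements, another drafts each form field from the company's own profile. The function names, prompts and `company_profile` structure are assumptions made for illustration, not the actual tool.

```python
# A simplified sketch of the tender workflow described above:
# 1) summarise the application requirements from the tender text,
# 2) draft each form field from the company's own profile data.
# Assumes the `openai` package (pre-1.0 interface); all names are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",   # larger-context variant, chosen here for long documents
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

def summarise_requirements(tender_text: str) -> str:
    return ask("List the eligibility and application requirements in this tender "
               f"as short bullet points:\n\n{tender_text}")

def draft_field(field_name: str, company_profile: dict) -> str:
    return ask(f"Using only this company data:\n{company_profile}\n\n"
               f"Draft the answer for the form field: '{field_name}'.")

if __name__ == "__main__":
    tender = open("tender.txt", encoding="utf-8").read()
    print(summarise_requirements(tender))
    profile = {"name": "Acme S.r.l.", "employees": 42, "certifications": ["ISO 9001"]}
    print(draft_field("Describe your quality management system", profile))
```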
From curiosity to commitment
This all seems very interesting. But why should companies care about generative AI? Isn't it just another passing fad? There have been many much-hyped technologies, such as blockchain, crypto and Web3, but they have yet to go mainstream. Why should AI be different? Let's start by saying that AI will not change our lives overnight. According to the Financial Times, "A boom in generative artificial intelligence and pandemic-induced workplace shifts will unleash a new era of faster productivity growth across the rich world, economists say, though it could take a decade or more for advanced economies to reap the full benefits."
However, in the long run, economists believe the boom in investment in AI will eventually produce compelling results. AI will affect not only how we work (it will touch 80% of jobs, according to McKinsey) but also how we live. Moreover, it is accessible to anybody and the barrier to entry is very low.
According to research by Udemy, one of the leading online learning platforms, carried out on 14,000 business customers globally, the number of minutes spent learning ChatGPT increased by 4,419% between the fourth quarter (Q4) of 2022 and the first quarter (Q1) of 2023. In addition, 470 new courses on generative AI have been added to the platform. In our opinion, AI is here to stay and the opportunities are real. This is also confirmed by how quickly it is spreading.
Because of its strong capabilities, in just a few months we have gone from playing with ChatGPT to building real tools. As recently as three months ago, companies were still asking: "What is generative AI?" Now they are using it to build applications for their business. We believe that now is the right time for companies to start experimenting, in ways that we will explore below.
But before diving into the nuts and bolts of how to integrate generative AI into a business, let's look at some of the aspects you should consider when developing a tool. First, developing a tool is not something that can be improvised. There are real talent implications, as it takes more than a generative AI model and a programmer to make it work across an organization. Second, there are real risks that need to be addressed and managed. The development of AI has raised new questions about the best way to build fairness, interpretability, privacy, and safety into these tools; after all, the applications will affect people's work and lives. Finally, speed is the strategy: those who start acting and learning now will create strategic distance much faster.
Anyone can build generative AI tools
Building a tool using generative AI is both simple and complicated. The simple part is that very often you do not need to reinvent the wheel: the heavy lifting will be done by an existing generative AI model. However, merely tinkering with various generative AIs is not enough to create a truly good tool. Since everyone can access ChatGPT, access to the API by itself is not enough to create a useful tool. Virtually anyone with API access can build something, but it might be too generic for a specific use. The same is true for iOS or Android apps: anyone can design an app, but very few have been successful at designing good ones. The idea that all you need is a good programmer, i.e. a good machine learning engineer, is just a myth.
The integration of AI into a company's business is a multifaceted process that requires a deep understanding of the use case, a robust infrastructure, high-quality data and a solid data management policy. This is why creating a collaborative environment is key for companies to successfully develop tools that people will actually use. For example, a company developing an onboarding tool must decide whether to keep the knowledge data within or outside its firewalls. This will have both engineering and data protection implications, so a machine learning engineer and a data protection lawyer should be part of the team from the beginning. Second, companies will need to understand how to design the tools; after all, it is the users, not machine learning experts, who will use them. The AI tool development team should therefore include programmers well versed in UX principles and product managers who understand the potential, as well as the limitations, of these tools. Look at the video- and audio-to-text apps out there: there are plenty, yet they are either too general or they do not respect intellectual property or data protection provisions.
Also, when dealing with powerful technologies like generative AI, there is a risk of focusing solely on the technology itself and its capabilities without considering how end-users will interact with it. The role of designers and front-end programmers who can create user-friendly applications will make the difference between powerful yet abandoned applications and limited but beloved ones.
Therefore, when a company decides to develop a tool, it will tap into an ecosystem of skills and players that enable its creation. As we will see below, the trade-off between how big the team should be and how much work should be outsourced will also depend on whether the company chooses to start big or small.
Finally, the importance of focus cannot be overstated, which is why someone covering strategy should be part of the team. While models like ChatGPT have been trained to perform well on generic use cases, they require significant fine-tuning when it comes to solving more complex problems. The most successful tools will be those that focus on very specific, niche tasks.
To summarise, the ideal team will include a strategist, a UX/product designer, a machine learning engineer, an IT specialist and a data protection lawyer. If a company is small and the budget for the MVP tool is tight, the minimum team should comprise a UX programmer, a machine learning engineer and solid advice from a privacy lawyer. All of them should work in an iterative process, since all of these aspects are interconnected.
Invent, but do not reinvent the wheel
This leads to the second aspect of creating a tool. If merely accessing the APIs does not guarantee a good tool, even when using ChatGPT, then some "extra" work needs to be done. What is this "extra" that needs to be developed? There is no black-and-white answer. Often, companies will find that many solutions already exist: someone has already burnt the midnight oil for you. So where should a company concentrate its efforts?
First of all, we have seen that developing a generative AI model is so resource intensive that it is out of the question for all but the biggest and best-resourced companies. Companies looking to put generative AI to work can either use it out of the box (i.e. access ChatGPT without any particular extra function) or fine-tune it to perform a specific task (i.e. by adding specific functions on top of ChatGPT). For example, if you need to prepare slides in a specific style, you could ask the model to "learn" how headlines are normally written based on the data in existing slides, then feed it new slide data and ask it to write appropriate headlines.
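For the slide-headline example, that "learning" can be as light as showing the model a few existing slides and their headlines inside the prompt. Below is a minimal few-shot sketch under that assumption; the example slides and house style are invented for illustration.

```python
# A minimal few-shot sketch of the slide-headline example: we "teach" the
# style with a couple of existing slides, then ask for a headline for new
# slide content. Example data and prompts are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

STYLE_EXAMPLES = [
    ("Revenue grew 12% YoY; EMEA drove most of the increase.",
     "EMEA growth lifts revenue 12% year on year"),
    ("Churn fell from 6% to 4% after the loyalty programme launch.",
     "Loyalty programme cuts churn by a third"),
]

def write_headline(slide_body: str) -> str:
    messages = [{"role": "system",
                 "content": "You write slide headlines in the company's house style."}]
    for body, headline in STYLE_EXAMPLES:        # few-shot examples of the style
        messages.append({"role": "user", "content": body})
        messages.append({"role": "assistant", "content": headline})
    messages.append({"role": "user", "content": slide_body})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                         messages=messages, temperature=0.3)
    return reply["choices"][0]["message"]["content"]

print(write_headline("Support tickets resolved within 24h rose from 71% to 88%."))
```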
Therefore, when creating a tool with ChatGPT, the starting point should be to identify a bottleneck in the company's business. This can be done by mapping, for example, the most time-consuming daily activities. These should be written down, together with the processes involved in dealing with them. The question should also be put to all the company members involved in those activities. This will give a good approximation of where the company should intervene.
Once that has been figured out, the next step is for a data scientist to create an environment where people in the company can literally play with ChatGPT's APIs (i.e. an MVP). The steps a human would take to perform a specific activity should be clearly documented. ChatGPT's strength lies in its generative and adaptable nature, so you can ask it to filter out irrelevant information or to consider parameters that reflect your way of performing that activity.
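Such a playground can start as little more than a helper that encodes the documented human steps as instructions and exposes a few parameters people can tweak. The sketch below is one possible starting point; the steps, topic and limits are assumptions for illustration.

```python
# A tiny "playground" helper of the kind an MVP might start from: the steps a
# colleague would follow are written down as instructions, and parameters let
# users tell the model what to focus on and what to ignore.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

HUMAN_STEPS = """1. Read the document.
2. Ignore anything unrelated to the topic below.
3. Summarise the rest as numbered action points."""

def run_step_by_step(document: str, topic: str, max_points: int = 5) -> str:
    prompt = (f"{HUMAN_STEPS}\n\nTopic: {topic}\n"
              f"Return at most {max_points} action points.\n\nDocument:\n{document}")
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply["choices"][0]["message"]["content"]
```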
When doing the above, it is very important to keep in mind that the tool will probably have two parts. One is the "extra" part, meaning what the machine learning engineer and the team will develop from scratch (for example, the function that filters through a tender questionnaire and matches it against the company's various databases of information). The second is the integration of what has already been invented into the tool: ChatGPT already has many built-in functions and will gain many more. A successful MVP will adapt those functions and enhance them by integrating the generative AI model with something very specific and unique that the company is developing. Finally, once good results have been achieved, the user experience layer should be added so that colleagues can interact with the underlying AI model.
Engineering the tool
Having decided what tool to develop, who makes up the team and what needs to be invented to customise the product, a company should turn to engineering the tool.
In our experience, when the CEO discusses integrating AI into the company's product, data scientists need to understand: a) the purpose of the tool (such as automating the onboarding process), b) what data are necessary (such as manuals, videos, company reports, existing courses), and c) the expected output (such as creating an onboarding coach for new employees). This investigation is crucial because the outcome significantly affects metrics such as cost, time to market, and infrastructure. The importance of this analysis is amplified in AI-driven projects: let's understand why.
AI-driven use cases normally fall into two categories: those that require training and those that do not. By training we mean using data from your specific scenario to teach the AI model what it should predict and how. If the tool does not require training, data scientists will likely use pre-trained models that handle many use cases without re-training on a specific one. In that scenario, the engineering team will primarily focus on setting up the IT infrastructure to make the model available to your processes and clients. Numerous use cases fall into this category, such as using a pre-trained model to classify customer reviews, or to determine from activity on the company's e-commerce site whether a user is genuinely interested in buying. When building a tool with ChatGPT, moreover, the engineering team can rely on its features and performance (which are improving over time), which removes the need to set up and manage that infrastructure.
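The review-classification case is a good illustration of the "no training needed" category: an off-the-shelf pre-trained model can be called directly. The sketch below uses Hugging Face's transformers library as one possible choice; the default model the pipeline downloads is an assumption of this example, not a recommendation.

```python
# A minimal sketch of the "no training needed" category: classify customer
# reviews with an off-the-shelf pre-trained sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pre-trained model

reviews = [
    "Delivery was quick and the product works perfectly.",
    "Support never answered my emails, very disappointing.",
]

for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {review}")
```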
Conversely, if the company's use case involves training a model, the engineering team will decide whether it calls for supervised learning, unsupervised learning, or reinforcement learning. While the latter is typically the domain of academic research, the former two are prevalent in industrial applications. In supervised learning, algorithms learn from labeled data to make accurate predictions or classifications, as in customer churn prediction and sales forecasting; in unsupervised learning, algorithms analyse unlabeled data to uncover patterns, relationships, and hidden structures, as in customer segmentation.
A pivotal element in training models is data. As the model's knowledge source, data must be of high quality and in sufficient quantity to match the complexity of your use case. This is where a data management policy and ethical values come into play; we will analyse this issue later in the article. For example, if your business wants to identify the customers that are about to leave, you need to share with the data scientist data you own (specific to your business) in which someone has hand-labeled which customers have left. In addition, the data scientist will most likely need the users' activity patterns (e.g. how often, how much and what they bought), as these help the model capture the complexity of the use case and identify the patterns and rules that influence the output. However, technical details are only part of the equation: the culture surrounding AI is equally important.
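For the churn example, a first supervised-learning pass could be as simple as the sketch below, where hand-labeled outcomes and a few activity features train a basic classifier. The CSV file and column names are assumptions about data the company would own.

```python
# A compact sketch of the supervised-learning case described above: hand-labeled
# churn outcomes plus activity features train a simple classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")             # your own, hand-labeled data
features = ["orders_last_90d", "avg_basket_eur", "days_since_last_login"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```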
Small vs big
When building a tool, it is also very important to recognise that, although AI carries a magical aura because in certain areas it can do more than we humans can, and far faster, it is ultimately a tool driven by mathematics and statistics. This means that, to develop a product, a company should work in small sprints and experiment constantly. This is why we often say that companies should play with ChatGPT first, learning how it works and how to use it. Very often, data scientists will have to choose between the biggest and most promising model, which usually requires more engineering effort, and a smaller one. Getting used to GPT's features will help a company understand how to approach this choice.
Without oversimplifying, smaller models often deliver the best performance, provided the data supports them. Opting for simpler, more manageable solutions, which are easier to adjust and re-engineer, can therefore mitigate the risks associated with re-engineering the AI solution. As an example, consider a small e-commerce business looking to implement a recommendation engine to suggest products to its customers. The engineering team can either use a complex model to offer highly personalised recommendations, or use a simpler approach based on an existing model to develop a tool that is faster to build and easier to communicate, debug and explain to product leaders. While the first is definitely more promising, the latter carries lower risk: it is faster to develop, requires a lower investment (training data, IT infrastructure, etc.) and is easier to monitor and tune to the business needs. That makes the second option the recommended path for the kick-off, with the first as the enhancement of an already working solution.
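To make the "simpler first" option concrete, the sketch below builds recommendations from nothing more than co-purchase counts in a plain order log; the products and data layout are invented for illustration. It is easy to explain and debug, which is exactly why it makes a reasonable kick-off before investing in a more complex model.

```python
# A sketch of the "simpler first" option for the e-commerce example: recommend
# the items most often bought together with what is already in the basket.
from collections import Counter
from itertools import combinations

# Each inner list is one past order (product IDs).
orders = [
    ["kettle", "mug", "tea"],
    ["mug", "tea", "biscuits"],
    ["kettle", "tea"],
    ["mug", "biscuits"],
]

# Count how often each pair of products appears in the same order.
co_counts: dict[str, Counter] = {}
for order in orders:
    for a, b in combinations(set(order), 2):
        co_counts.setdefault(a, Counter())[b] += 1
        co_counts.setdefault(b, Counter())[a] += 1

def recommend(basket: list[str], k: int = 3) -> list[str]:
    """Suggest the k products most frequently co-purchased with the basket."""
    scores = Counter()
    for item in basket:
        scores.update(co_counts.get(item, Counter()))
    for item in basket:                      # don't recommend what is already there
        scores.pop(item, None)
    return [item for item, _ in scores.most_common(k)]

print(recommend(["tea"]))    # e.g. ['kettle', 'mug', 'biscuits']
```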
Risks and rewards
AI and, most especially, generative AI can certainly automate and augment a company's business, but they can also put it at risk. As we have seen above, companies need data to create a tool. When CEOs decide whether to develop a new tool, the most common questions are about data security, such as: "If I use some of the company's private data and share it with a large language model, how do I know that my data does not feed the intelligence of an external model such as ChatGPT?" Generative AI puts the risk topic on steroids, both for companies and for users. Furthermore, when a company develops an AI-based tool, who ultimately owns the intellectual property (i.e. the rights to the permutations of the data)? These issues are currently open, and we will try to unpack them below.
The AI Act, the legislative proposal introduced by the European Commission on April 21, 2021, is the closest thing we have to a guideline on how to develop a responsible and safe AI tool. At the time of writing, the Act is waiting to be discussed and voted on by the EU Council before finally becoming law; it is expected to be adopted by the end of 2023 or the beginning of 2024.
The AI Act will provide companies with a framework to help them develop, deploy, and use AI technologies that are safe, transparent, and trustworthy while upholding fundamental rights and values. The Act classifies AI systems into four risk levels based on their potential impact on health, safety, and fundamental rights. These risk levels determine the requirements and obligations imposed on the developers, providers, and users of AI systems. So there is a safeguard that will protect people from the development of risky tools.
However, the Act does not sit well with generative AI because it was conceived before the arrival of ChatGPT. According to the current logic of the AI Act, the categorisation of an AI system as high risk or not depends on the purpose of use that the provider envisages, meaning what OpenAI intends to do with ChatGPT. Systems intended to be used in one of the areas specified in Annex III of the regulation are considered high risk and subject to strict obligations. In most other situations, AI systems fall into the lower-risk categories.
The problem with developing tools using generative AI is that it is not OpenAI but the company using ChatGPT that determines how the tool will be used and whether it falls into the low- or high-risk category. In other words, a company might use ChatGPT to do things that put people at risk, so some of the risks for users and employees will result from the way companies use these systems. For example, when developing an onboarding tool, a company might also use it to grade the candidate. What if the tool "hallucinates" and the person is wrongfully judged? What if that person is still on probation? Should the tool have a "human in the loop" to double-check the grades, or simply not be allowed to grade the candidate in the first place? OpenAI and ChatGPT will have nothing to do with this decision, yet the AI Act does not state obligations for companies that build tools on generative AI.
This means that companies developing tools based on generative AI will ultimately have to decide how responsible they want to be and how safe their tools should be. The AI Act will not require them to be accountable; responsibility and safety will depend on the company's own ethics. This is why AI development teams must include data protection lawyers to navigate an array of issues that are still in a grey area.
Compliance: yes or no?
Having said this, what are the main issues involved in developing an AI tool? The first relates to data. Companies should understand what data should and should not be put into the model, especially when identifiable information is involved. Each model is only as good as the data it is fed. So, choosing what data to use and ensuring that the model respects users' privacy rights are the first steps of a legal compliance strategy.
This is particularly true when choosing where the model sits. As we have seen above, if it sits outside the firewalls, data will be transferred outside the company; if it sits within the firewall, data will be kept inside. Each setup carries a different degree of data and IP risk. Again, a privacy professional and a machine learning engineer should work closely together, as should the ecosystem of partners the company is working with.
The second major legal aspect relates to intellectual property: when a company develops a tool based on generative AI, who ultimately owns the rights to the permutations of the data (i.e. the intellectual property)?
These are some of the issues a company will face when working on a compliance strategy. It is important to understand that the compliance strategy will strongly influence both the business model and the machine learning model. For example, if a company handles sensitive data, it might choose to keep the model inside its firewalls and start from a small product rather than a big one. Furthermore, to address safety, it might choose to anonymise the data subjects. All these decisions will have to be coordinated between the legal, business and IT teams.
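As one concrete illustration of what "anonymising the data subjects" can mean in practice, the sketch below replaces direct identifiers with stable tokens before a record leaves the firewall, keeping the salt inside the company. Field names and the salt handling are simplifying assumptions; a real policy would also cover indirect identifiers and key management.

```python
# A minimal pseudonymisation sketch: direct identifiers are replaced by stable
# tokens before records are shared with an external model, while behavioural
# data is kept. Field names are illustrative assumptions.
import hashlib

SECRET_SALT = "keep-this-inside-the-company"   # assumption: stored internally only

def pseudonymise(record: dict, pii_fields: tuple = ("name", "email")) -> dict:
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256((SECRET_SALT + str(out[field])).encode()).hexdigest()
            out[field] = f"anon_{digest[:12]}"
    return out

customer = {"name": "Mario Rossi", "email": "mario@example.com", "orders_last_90d": 4}
print(pseudonymise(customer))   # identifiers replaced, behavioural data kept
```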
Finally, the most important aspect to consider when developing a responsible and safe tool is why a company should comply at all. As we have seen, AI is still in a grey area, yet it is often in grey areas that opportunities are built. This is where the famous Silicon Valley quote resonates with companies: "ask for forgiveness, not for permission." Or Mark Zuckerberg's reported motto: "move fast and break things." However, what does it mean to break things when you develop an AI tool? Is it worthwhile, and safe, to do so?
Compliance is often regarded as a leash that inhibits innovation, but we believe the key question is ultimately: "how can you be compliant and yet innovative?"
When advising companies on how to comply with the law, you often hear: "Why do we need to comply when others are not? At the end of the day, we are all experimenting with something new." We often face situations where it is difficult to say what is right and what is wrong, and where wrong advice could strongly affect the business model of a new tool. So why do companies need to comply, and what does it mean for innovation? Or better: when many players in grey areas prefer to cut corners, why should I comply when the rules are fuzzy in the first place? How far should a company push the envelope? Looking at the AI tools on the market, since we are at such an early stage, it is not rare to encounter companies that, without even knowing it, are violating both IP rights and privacy. Some of these companies are even funded.
Our opinion is that, in the end, being compliant is an act of responsibility in itself. Compliance is not only useful for providing safety and acting accordingly; it is also a strategic tool. Whenever a company looks to build partnerships or bring in investors, it will eventually be asked whether what it does is compliant and sustainable. With AI, this will become an even more pressing question and, in the near future, a potential deal breaker. AI moves very fast, and a well-thought-out, well-engineered tool, including from a compliance point of view, will stand a better chance of surviving in the long run than those that are not structured accordingly. One example is what happened to OpenAI: not long after the Italian authorities blocked ChatGPT's use in Italy, other countries moved in to question it. The reason? Privacy and intellectual property issues.
Hence, we believe there are two ways to develop an innovative tool. One is to stop and wait until the answers have been figured out, by which time it might be too late. The other, the one we have experimented with, is to assemble multidisciplinary teams of HR, technology, legal, risk and business experts and decide what the right thing to do is given what we know today, then improve and learn from our mistakes.
What is certain is that developing generative AI tools is an iterative process that will eventually pass through the three areas we have touched upon in this article. Mistakes can and will happen, and there will be plenty of uncomfortable questions and answers. In the end, AI is by definition a probabilistic tool, an inexact science. Not only will models need to be trained; organisations will need to go through the same "training", refining their tools and governance to ensure they learn from their mistakes and that what they do remains sustainable.