Beyond the hype: AI and its impact on companies’ productivity
Picture generated by AI (Microsoft Image Creator)


One of the main landmarks every January is the World Economic Forum meeting in Davos: the quintessential place to network and speak about future trends. Social networks were flooded with pictures of the who’s who of global leaders standing in the middle of the snowy streets (the one that illustrates this article is AI-generated) or speaking in tiny, crowded meeting rooms.

Artificial intelligence was the main topic this year. That was no surprise: although many AI applications and services have been launched since 2005, it is the Generative AI hype and the possibility of interacting directly with these models through apps on platforms like ChatGPT that have increased its popularity exponentially. Investors and large tech companies have been pouring huge amounts of money into its development.

I had the chance to follow the discussion between Sam Altman (OpenAI CEO) and Satya Nadella (Microsoft CEO) (see link). Not many specific details about future developments were revealed. It was an interesting discussion with a stark contrast between an experienced and seasoned CEO focused on commercial applications and a utopian one, passionate about his product and aiming to win the quest for AGI (Artificial General Intelligence) and ‘reasoning’ (although there was no consensus on what AGI means).

Regarding specific developments that OpenAI may bring very soon, Nadella pointed out general improvements in the multimodal (text, image, video, code) approach and the possibility of generating code automatically. His vision is that this can enhance companies’ productivity, as the traditional boundaries between product managers, front- and back-end engineers, DBAs, UI/UX designers, DevOps engineers, and architects will blur.

It’s not only about coding: they forecast further developments in the way we interact with computers and in formats like images, sound, and languages other than English. Above all, both were enthusiastic about new capabilities from GPT (Generative Pre-trained Transformer) systems becoming smarter and smarter. Nadella (he is indeed a very good salesman) compared the upcoming AI to other key IT milestones like web server architecture or cloud computing.

In the meantime, during this WEF meeting, AI doomsters also had their say about the future of AI in the workplace. The comment that resonated most came from the IMF (International Monetary Fund), claiming that 40% of current jobs will be impacted by AI (see link).

Altman’s point of view is more aligned with Schumpeter’s ‘creative destruction’ concept: some positions will be impacted, but new opportunities will arise. At the moment, human intervention is needed in specific tasks in the AI process (prompting, supervision…), but it is clear that clerical, knowledge, and creative workers will suffer most in this transformation.

Nevertheless, there are still many questions pending regarding AI and how the whole AI ecosystem may evolve. Some topics, like the open- vs. closed-source debate and the concentration of power in platforms, were discussed. In the following points I review some of these topics, but first let’s have an overview of the AI value chain.


AI value chain

It is good to have an overview of the whole AI ecosystem before tackling the big questions about AI. There are six top-level categories in the generative AI ecosystem. Upstream there are Computer Hardware and Cloud Platforms, which provide access to accelerator chips optimized for training and running the models. In this area there are not many competitors. Most of the specialized chips that are key for the clusters of GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) that process the huge amounts of data needed to train Foundation Models are produced in Taiwan by TSMC and designed by companies like NVIDIA or Google. Regarding Cloud Platforms, most of the market is in the hands of AWS, Azure, Google Cloud, and Alibaba Cloud. Accelerator chips are currently a bottleneck, and most of the main AI players (Meta, Google, OpenAI/Microsoft) are in a race to stockpile this hardware.

At the core of the value chain are the AI Foundation Models, trained on massive amounts of data (text, images, audio, and video), which allow LLMs (Large Language Models) to generate content. The first foundation models used sources like Wikipedia, Encyclopedia Britannica, image suppliers like Shutterstock, and others (for instance, scraping the web) to train models that can predict data with accuracy.

They are based on neural networks in which the weights of the different variables are adjusted iteratively to increase the accuracy of the whole model. Data must be prepared, run through the selected architecture, and fine-tuned iteratively. These iterations (epochs) imply that a huge amount of data must be processed (the more data and iterations, if properly adjusted, the more accurate the model). The main players include large IT companies (Microsoft/OpenAI, Google, Meta, and Amazon) and some new entrants (Anthropic, Mistral, Cohere, and AI21).
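To make this iterative weight adjustment concrete, here is a minimal sketch in Python (plain NumPy, toy data): a tiny logistic-regression “network” whose two weights are nudged each epoch to reduce prediction error. Foundation Models apply the same principle, with billions of weights and vastly more data.

```python
import numpy as np

# Toy dataset: 200 points with 2 features and a binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # the "weights" that will be adjusted iteratively
b = 0.0
lr = 0.1         # learning rate: how big each adjustment is

for epoch in range(100):                # one epoch = one full pass over the data
    p = 1 / (1 + np.exp(-(X @ w + b)))  # current predictions (probabilities)
    grad_w = X.T @ (p - y) / len(y)     # gradient: direction that reduces the error
    grad_b = np.mean(p - y)
    w -= lr * grad_w                    # nudge the weights toward higher accuracy
    b -= lr * grad_b

print(f"accuracy after 100 epochs: {np.mean((p > 0.5) == y):.2f}")
```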

A new trend regarding Foundation Models are SLMs (Small Language Models). These models enable further control, are easy to run and fine-tune, have lower latency, and offer acceptable accuracy for some common tasks like summarization or classification. Some available ones are Llama-2-13b and CodeLlama-7b from Meta, Mistral-7b and Mixtral 8x7b from Mistral, Phi-2 and Orca-2 from Microsoft, DistilBERT from Hugging Face, Google’s Gemini Nano, TinyBERT, and T5-Small. They are mostly for limited use cases, but they can be run in on-premises/on-device deployments, which makes them ideal for specific applications.
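To illustrate how lightweight these SLM deployments can be, here is a minimal sketch using the Hugging Face transformers library with the DistilBERT sentiment model mentioned above; it runs on an ordinary laptop CPU (assuming transformers and a backend such as PyTorch are installed).

```python
from transformers import pipeline

# DistilBERT fine-tuned for sentiment: ~66M parameters, small enough for a CPU.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The new dashboard cut our reporting time in half."))
# e.g. [{'label': 'POSITIVE', 'score': 0.999...}]
```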

Further down the value chain we find Model Hubs and MLOps: the tools, technologies, and practices that enable the adaptation and deployment of end-user applications. These applications are based on the pretrained information of the Foundation Models, so developing them means fine-tuning this information to adapt it to a specific use. Normally, Foundation Model suppliers provide access to these tools and technologies through apps (closed source), but there are also open-source players (like Hugging Face) that provide access to their models and MLOps capabilities so that companies can fine-tune these models with their own proprietary data.
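As a sketch of what this fine-tuning looks like in practice, here is a minimal example with the Hugging Face Trainer API; the CSV file name, the three-label setup, and the hyperparameters are illustrative assumptions, not a prescription.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical proprietary data: a CSV with "text" and "label" columns
# (e.g. support tickets labelled 0=billing, 1=technical, 2=other).
data = load_dataset("csv", data_files="company_tickets.csv")["train"]
data = data.train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()  # the pretrained weights are adjusted to the proprietary data
```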

Finally, downstream we find Applications and Services. Some players use Foundation Models to develop specific applications. Some examples are content writing, chatbots, analysis and synthesis of text, code generation, image and video editing, translation, and text-to-voice and voice-to-text conversion, to name a few.
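Many of these applications are thin wrappers around a Foundation Model API. Here is a minimal text-summarization sketch with the OpenAI Python SDK; the model name is only an example, and the API key is assumed to be set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

report = "…a long internal report pasted here…"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; any chat-capable model works
    messages=[
        {"role": "system", "content": "Summarize the user's text in two sentences."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```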

Other players use existing Foundation Models with specific data to deploy solutions for a specific industry (healthcare, manufacturing, education…). Normally they train the models to produce a solution adapted to an industry (for instance, image analysis in healthcare to identify a tumor; see link).

One last trend is the emergence of LAMs (Large Action Models), still at an early stage. These models not only generate content, but also perform the subsequent actions. For instance, if I want to fly somewhere, a LAM finds the most suitable option based on my preferences, accesses my credit card, buys the ticket from the airline, and sends me the boarding pass. One example is Rabbit (rabbit r1) (see link). Google is also working on similar models.


Big questions around Generative AI

1. Open source vs. closed source.

Algorithm transparency has always been a controversial topic. Consider the debate about social media and how to find a proper balance between the rights of the company that owns and deploys the code and the social scrutiny needed to understand its impact on people’s behavior.

In the case of AI, the debate goes further: the code can also be put to malicious uses. This has been a recurrent line of reasoning for some companies to avoid providing further details about how their Foundation Models have been trained, and it sparked the huge controversy at the OpenAI board last November. Here you can check Ilya Sutskever’s (one of OpenAI’s founders) point of view (April 2023).

2. Platforms

Foundation Models require huge processing capabilities, research, and data availability. Only large IT players have access to these resources. If you need access to videos to train your models, needless to say, YouTube (Google) or TikTok have a head start. If you need text or conversations, WhatsApp (Meta) or Microsoft are probably good sources. As we saw in the previous point, they also have the processing capabilities in the Cloud and privileged access to chip manufacturers.

This leads us to the power that some corporations may amass if there is not enough competition to let new companies into the AI ecosystem. The access they may have to information (some of it quite sensitive) regarding users, data, and industries gives them a privileged position.

Just an illustrative example: last month the US Government used one of its most powerful supercomputers, Frontier, to train large-scale language models. Despite its huge processing capabilities, the US Government acknowledged the difficulty of training LLMs. It looks like private companies will prevail, at least in the short term.

3. Copyright

This is one of the most controversial topics regarding AI. Most of the text, images, videos, and content used to train Foundation Models may have been obtained by scraping the internet. Quite recently, The New York Times sued OpenAI and Microsoft for copyright infringement. According to the newspaper, OpenAI used content restricted by its paywall. The case is open, and producing evidence is not an easy task. Many others may follow suit (see link).

4. Processing bottlenecks to train AI models

As explained previously, NVIDIA has an 80% market share of the AI-capable GPUs (Graphics Processing Units) needed for Generative AI, and TSMC manufactures most of these chips. Although the main platforms are trying to find alternatives in other companies like AMD, this situation is not easy to change in the short term.

5. Biases / Cultural awareness in the models.

Initially most AI Foundation Models were trained mostly on data gathered in English and in Western countries, and therefore they reflect the biases already present in the texts, images, videos, and audio that trained them. Note that text, images, and videos from across the internet and social media are being used to feed these AI platforms.

As new languages have been incorporated into the Foundation Models, new versions of the most common platforms have been able to reduce these biases. However, some languages have a very limited digital footprint (the data that is used to train the models), and there are still unacceptable results in some generated text, videos, or images.

Another unresolved challenge is what will happen when AI Foundation Models are trained on data already generated by AI. This is not the case now, but it surely will be in the future. Some computer scientists are already proposing solutions to distinguish content generated by humans from content generated by Generative AI.

6. Cybersecurity and other malicious uses.

There is a growing concern that generative AI can be used to increase cyber threats, as it makes aspects of cyber operations such as reconnaissance, phishing, and coding easier and cheaper.

Vulnerabilities also increase: introducing AI into the software stack can lead to implicit and explicit access to data, model parameters, or resulting outputs, increasing the attack surface of the whole system.

The case of explicit AI-generated images of Taylor Swift published on social media is a clear example of malicious use (and of the lack of content moderation on social media) that has even prompted the US Congress to propose a new bipartisan bill, the AI Fraud Act (see link).

Large AI companies are trying to put some ‘guardrails’ in place to mitigate biases and malicious uses. However, these ‘guardrails’ are quite ineffective: the Machine Learning technology and transformer architecture used in the models make it very difficult to remove content already used to train them.

7. Hallucinations. Supervision / Content moderation.

Although the accuracy of AI Foundation Models and their downstream applications has increased through further training, every Generative AI system produces errors or false information: these are called hallucinations. One golden rule when using these systems is to always review or supervise the outcome. This is a gap that the main AI players have yet to close.

What will the future look like? There are basically two main trends. Some experts like Altman keep searching for Artificial General Intelligence; this is the stated mission of companies like OpenAI.

The other trend forecasts a more pragmatic approach: training on specialized datasets, with no need to scale training data further, and specific applications for companies and industries. This is more aligned with the SLMs discussed previously (see link). Here you also have the point of view of Ilya Sutskever (Chief Scientist of OpenAI).

These approaches, especially when it comes to regulation and ethics, have some critical voices among eminent computer scientists like Margaret Mitchell (Hugging Face), Abeba Birhane (Mozilla), Timnit Gebru (ex-Google AI), Fei-Fei Li (Stanford), and Geoffrey Hinton (University of Toronto, ex-Google).

I highly recommend watching this video from Geoffrey Hinton, nicknamed the ‘Godfather of AI’ (it is an excellent summary of how AI works and its challenges in the near future), and this recent lecture by Professor Michael Wooldridge from the Alan Turing Institute.


New regulations. European Union AI Act.

In this complex environment, the European Union has been discussing a draft AI regulation: the AI Act. It is a bold initiative, led by MEPs Dragos Tudorache and Brando Benifei, as the technology is still evolving and changing. The declared objective of the law is to foster a human-centered AI, putting this regulation in place without hindering the development of AI in EU territory. I recommend this debate at Stanford where one of the MEPs working on the Act (Dragos Tudorache) explains the difficulty of finding this balance (link).

The Act tackles some of AI’s most controversial topics, like Generative AI, biometric surveillance, and other high-risk activities, and it is very specific when it comes to governance and enforcement. It defines different key concepts that apply to AI, like Artificial General Intelligence, development standards, auditing mechanisms, documentation, and open- and closed-source models.

The AI Act creates a specific AI Agency to supervise its enforcement (supported by national ones). Among other tasks, this Agency will be responsible for scoring Foundation Models and controlling documentation and platform transparency. In this link you can access the draft and summary of the AI Act.

Main concepts

  • The AI Act follows the OECD nomenclature and definitions regarding AI.
  • It applies to companies outside the EU.
  • It sets up a risk-based system (link to estimated model of the AI Act risk assessment system): Prohibited AI, High-Risk AI, Limited-Risk AI, Minimal-Risk AI.
  • It sets specific requirements for Generative AI, General Purpose AI, and Foundation Models regarding transparency and disclosure: generated content should be labelled/detectable, and the interacting user should be informed beforehand (for instance, with chatbots).
  • There is a grace period of between 6 and 24 months.
  • Complaints can be submitted by any individual.
  • There are exemptions for some activities: national security, defense, and military; research and development; and, partially, open-source applications. The Act only applies once the AI system has been placed on the market or put into service.


Activities labelled as Prohibited and High Risk.



Specific requirements for High-Risk Activities

  • Should be registered in a public EU database.
  • Transparency in its use and documentation.
  • Must have human supervision.
  • Data governance mechanisms in place (representative training data, no biases…).
  • Implement risk and quality management systems.
  • Accuracy, cyber security, and robustness assessments.
  • Fundamental rights assessment.

The general principle within EU legislation is: if something is illegal in the real world, it should also be illegal in the digital one. No one can deny the huge effort behind this Act by the MEPs and their teams to produce this legal text.

From my point of view, there are some challenging topics in the AI Act:

  • Enforcement. Although an AI Agency will be created, supported by national agencies, it remains to be seen how effective they will be at monitoring large platforms and companies with a global scope. There are also some vague concepts in the law: for instance, where is the limit between behavioral manipulation and influence?
  • Impact on third countries’ legislation. At least a transatlantic agreement would be desirable. Previous acts like the GDPR became a global reference; it is still not clear whether the AI Act will.
  • A hindrance for entrepreneurship? Some voices in Europe, like Mistral’s CEO, have been very critical of the Act (see link). The incentives, investments, and sandboxes described in the law have yet to materialize. R&D is exempted, but, as explained before, AI Foundation Models require large investments, and it is only once they are launched that they can be monetized. Consequently, some large platforms may decide to launch their models, apps, and services outside the EU first, as was the case with the recent release of Gemini (first launched in the US, Japan, and Korea).

Beyond the current hype, and despite the risks that Generative AI may bring, AI has become an essential tool to solve real-world problems.


How to apply AI to my business in a simple and practical way.

There is a lot of debate about how AI will impact work and business. Some people think that AI will take current IT systems and jobs by storm; the popularity of NLP (Natural Language Processing) tools launched by large technology groups, like ChatGPT-4 and Gemini, backs this vision. However, there is an alternative vision that supports an incremental implementation within the current context, where software development cycles are much shorter than in the past.

MLOps (Machine Learning Operations) tools and Model Hubs now enable AI to be embedded in current IT developments to perform a task. Generative AI tools allow the transformation that traditionally took place in manufacturing and operations to be extended to typical white-collar tasks.

According to IFI CLAIMS Patent Services records, most AI patents in the last five years cluster in a handful of areas. This information can give us a view of how AI is being applied.



Beyond the large number of patents linked to computing arrangements based on biological models (applied mainly in healthcare), most developments had to do with pattern recognition, image/video/speech/language data analysis, and machine learning.

Here you have some suggestions that can be easily applied in many companies:

Marketing & Sales.

  • Create personalized content and posts for clients.
  • Create clusters of customers (see the sketch after this list).
  • Detect trends (customer attrition, cross-selling opportunities).
  • Power dynamic pricing tools.
  • Use speech recognition to analyze customer requests, generate content or report information from the field.
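
As referenced above, here is a minimal customer-clustering sketch with scikit-learn; the feature set (annual spend, orders per year, days since last order) is a hypothetical example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical customer features: [annual spend, orders per year, days since last order]
customers = np.array([
    [12000, 48, 3],
    [300, 2, 200],
    [8000, 30, 10],
    [150, 1, 365],
    [9500, 40, 5],
    [500, 4, 120],
])

X = StandardScaler().fit_transform(customers)  # put features on a comparable scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 1 0 1 0 1]: active big spenders vs. lapsed customers
```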

Customer Service.

  • Use chatbots to guide customer inquiries. There are many apps, requiring no coding skills, that allow companies to feed proprietary data and content into these processes to swiftly resolve specific customer requests.

IT.

  • Generate new code and debug existing code.
  • Adapt existing code to new programming languages.
  • Automate some recurrent IT infrastructure tasks.
  • Adapt capabilities and interfaces to customer needs.
  • Detect cybersecurity risks.

Operations and Service.

  • Image recognition in manufacturing processes.
  • Streamline recurrent administrative processes (invoicing, complaints, information requests…).
  • Summarize legal texts.
  • Exploit location services to reallocate shipments.
  • Create personalized content to train staff.

When it comes to business, I find it particularly interesting to apply these techniques to companies’ proprietary data. AI is a logical next step for companies that were able to adapt their IT architecture to exploit their own data. The traditional role of Data Analysts is now supercharged by applying AI to detect new insights and trends. Machine learning applied to data enables tailored customer experiences and content, and much more flexibility in our computing and development capabilities.

Regarding Generative AI, a very common and useful design pattern is Retrieval-Augmented Generation (RAG), which combines an LLM like ChatGPT or Gemini with your own knowledge base: relevant documents are retrieved first and then passed to the model as context.

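A minimal sketch of the retrieval step may help here. It uses the sentence-transformers library for embeddings; the document snippets and the model choice are illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

# Hypothetical internal knowledge base (in practice: chunks of your own documents).
docs = [
    "Refunds are processed within 14 days of the return being received.",
    "Premium support is available Monday to Friday, 8:00-18:00 CET.",
    "Orders above 50 EUR ship free within the EU.",
]
doc_vectors = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=2):
    """Return the k documents most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM such as ChatGPT or Gemini
```

The retrieved context grounds the model’s answer in your own data, which reduces hallucinations compared with asking the LLM directly.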

My advice is always to experiment internally with a specific project, embedding AI in your existing developments to gather or produce data and content more efficiently. Costs are affordable (they vary depending on the amount of data and the accuracy needed), and the investment is worth it for recurring and repetitive processes. It is easier to fine-tune one of your existing processes than to start a project from scratch.

Despite the existing hype regarding AI, there are many practical and easy-to-implement applications and developments that can dramatically improve your processes. The objective is not to get rid of staff (in fact, human supervision is recommended in generative AI processes) but to reallocate our associates to where they can add more value to our customers. AI technology will become a commodity, and its value will depend on how you adapt it to your company. In the end, it is your people’s skills (talent), your proprietary data and knowledge about your industry and customers, and the efficient implementation of your strategy that will make the difference for your business success.

Juan Carlos Sánchez Rodríguez

CEO | Managing Director | AdelantTa, Recruitment, Training and Consulting | Human Resources, Marketing, Sales


I totally agree with your message, Francisco Rivero. We need to define precisely which tools are useful for each organization and each position, while understanding AI as a complement that improves productivity, under proper supervision. I look forward to having a chat with you about this topic; I am sure your contributions and your strategic vision on this matter are very valuable.
