AI's Role Revolution: But What Exactly Are We Changing?

Among the topics frequently discussed regarding Artificial Intelligence in the professional sphere, there's certainly the question of how the roles and responsibilities of company personnel should be redefined.

Equally often, however, articles and interviews on the subject tend to take a wait-and-see approach, primarily aiming to observe how real-world scenarios unfold before taking action accordingly.

This approach is understandable for two main reasons:

  • We are still in the early stages of AI adoption, which is potentially disruptive but still maturing,
  • It is reasonable to assume that the technological elements impacting people's work methods are not always fully understood or known outside the Digital area of an organization, thus hindering a clear vision of the future workforce.

However, rigidly adopting a wait-and-see stance could prevent companies from moving ahead of others in becoming a true "AI-driven organization" through targeted interventions directed at personnel, with the operational and reputational benefits that would follow.

Therefore, while acknowledging the two points above as objective conditions, it is still worth the effort to reason through how roles and responsibilities might be redefined in the AI era.

To do so, breaking down the term “Artificial Intelligence” into more specific elements can simplify the problem and make it easier to grasp. When people refer to Artificial Intelligence today, they often mean Generative AI, a subset of AI, and more specifically “general assistants” like Google Gemini or ChatGPT, a very particular category of Gen.AI systems. These, however, are only a fraction of the many types of AI.

Dedicated AI Algorithms - Low/No Impacts

This category covers algorithms (e.g., Machine Learning) that replace others of a different nature within end-user applications (e.g., for predictive maintenance or risk analysis). In these situations, the impact on the employee is essentially negligible: operators typically cannot tell whether a given output of the system was produced by an AI algorithm or not. The performance of the supported processes may well improve (e.g., fewer incidents), but from the operator's perspective, their role and skills remain essentially unchanged.
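
As a purely illustrative sketch of why this is invisible to the operator, consider the hypothetical predictive-maintenance snippet below (Python, with invented sensor features and synthetic training data): a legacy threshold rule and an ML model sit behind the same application function, so the person reading the output sees no difference in their day-to-day work.

```python
# Hypothetical sketch: a predictive-maintenance check whose caller cannot tell
# whether the answer comes from a fixed rule or from an ML model, because both
# sit behind the same function signature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_risk(sensor_readings: np.ndarray) -> bool:
    """Legacy heuristic: flag the asset if the first reading exceeds a threshold."""
    return bool(sensor_readings[0] > 0.8)

# Train a stand-in model on synthetic data, purely for illustration.
rng = np.random.default_rng(42)
X_train = rng.random((200, 3))  # e.g., vibration, temperature, load (invented features)
y_train = (X_train[:, 0] + 0.3 * X_train[:, 1] > 0.9).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def ml_based_risk(sensor_readings: np.ndarray) -> bool:
    """Same contract as the rule, but backed by the trained classifier."""
    return bool(model.predict(sensor_readings.reshape(1, -1))[0])

def assess_failure_risk(sensor_readings: np.ndarray, use_ml: bool = True) -> bool:
    """The only function the end-user application calls; the switch is invisible upstream."""
    return ml_based_risk(sensor_readings) if use_ml else rule_based_risk(sensor_readings)

print(assess_failure_risk(np.array([0.9, 0.4, 0.2])))  # the operator sees the same kind of flag either way
```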

Process Automation through AI - Predictable Impacts

The use of AI systems in process automation is a well-known and long-standing topic, similar to the one just discussed. In this case, AI systems are employed to replace the operational/manual activity of a person in performing a specific task.

In scenarios of this nature, the impact on roles and responsibilities is largely predictable and, in fact, technology-agnostic (i.e., whether the automation is AI-supported or not).

Simply put, human intervention will no longer be necessary in carrying out a particular task, and the organization's objective will be to understand how best to redeploy the freed workforce.

Typical examples include reallocating at least some of these personnel to supervise and certify the quality of the result offered by the automation, accompanied by dedicated training for the new task.

Identifying the number of people replaced, and potentially those taking on a different role within the same process, is typically an activity carried out within the business cases supporting investment in automation. It is therefore a well-established practice: the underlying technology may change, but the considerations regarding impacted people's roles and responsibilities remain the same.
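
Purely by way of illustration, and with entirely invented figures, the arithmetic such a business case typically contains might look like the sketch below; every number is an assumption, not a benchmark.

```python
# Illustrative business-case arithmetic with invented figures: how many FTEs an
# automation frees up, and how many of them are redeployed to supervising and
# certifying the automated output.
hours_per_fte_per_year = 1_600   # assumed productive hours per person per year
task_hours_per_year = 24_000     # assumed total hours currently spent on the task
automation_coverage = 0.80       # assumed share of the task the AI system takes over
supervision_share = 0.25         # assumed share of freed capacity kept for quality supervision

freed_fte = task_hours_per_year * automation_coverage / hours_per_fte_per_year
supervision_fte = freed_fte * supervision_share
redeployable_fte = freed_fte - supervision_fte

print(f"FTEs freed by automation:        {freed_fte:.1f}")
print(f"FTEs moved to supervision/QA:    {supervision_fte:.1f}")
print(f"FTEs available for redeployment: {redeployable_fte:.1f}")
```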

Generative AI Applied to Specific Use Cases – Partially Predictable Impacts

Among the most common use cases implemented by companies across various industrial sectors (e.g., tech, consulting, automotive – see links 1, 2 and 3 below for examples), the use of Gen.AI-based chatbots for accessing proprietary documentation is notable. In simple terms, these are ChatGPT-like interfaces allowing conversational interactions to quickly retrieve information such as internal processes, policies, codes of conduct, product details, etc.
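
As a rough, hypothetical illustration of how such a documentation chatbot is often put together (a retrieval-augmented generation pattern), the sketch below stubs out `embed()` and `generate_answer()`, which in a real deployment would call whichever embedding model and LLM the company has chosen; the documents and the retrieval logic are deliberately minimal.

```python
# Minimal sketch of a retrieval-augmented chatbot over internal documents.
# `embed` and `generate_answer` are hypothetical placeholders, stubbed here so
# the overall flow is runnable end to end.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: in practice, call the company's embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(64)

def generate_answer(prompt: str) -> str:
    """Placeholder generation: in practice, call the chosen LLM with this prompt."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

internal_docs = [
    "Travel expenses must be approved by the line manager before booking.",
    "The code of conduct requires reporting conflicts of interest to HR.",
    "Product X is certified for outdoor use between -10C and 40C.",
]
doc_vectors = np.stack([embed(d) for d in internal_docs])

def answer(question: str, top_k: int = 2) -> str:
    q = embed(question)
    # cosine similarity between the question and every document chunk
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(internal_docs[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = f"Answer using only this internal documentation:\n{context}\n\nQuestion: {question}"
    return generate_answer(prompt)

print(answer("Who approves travel expenses?"))
```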

But it doesn’t stop there. Gen.AI-based systems aimed at maximizing productivity in highly specific areas, such as software development (see link 4) or CRM (see link 5 below), are increasingly being adopted.

The introduction of such systems reveals only part of the implications for the roles and responsibilities of the people involved and, consequently, only part of the overall HR actions that will be needed.

Let's begin with the following reasonable assumptions:

  • The use of these tools can significantly increase staff efficiency in performing certain tasks (e.g., even more than 50% faster in writing specific software code – see link 6 below), thereby boosting productivity and calling for consequent organizational interventions (e.g., deciding whether to deliver the same work volumes with reduced staff or to increase the tasks assigned to each developer; a quick illustrative calculation follows this list),
  • Interacting with conversational tools requires new skills, such as semanticization and (re)ontologization, necessitating targeted interventions for their improvement, as detailed in one of my previous articles (see link 7 below).
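
As a back-of-the-envelope illustration of the first point, with invented figures: a 50% speed-up on coding alone does not translate into a 50% capacity gain, because coding is only part of a developer's week, and it is precisely this gap that makes the organizational decision (same output with fewer staff vs. more tasks per developer) worth reasoning about explicitly.

```python
# Back-of-the-envelope calculation with invented figures (an Amdahl's-law-style
# argument): how a speed-up on coding tasks maps to overall developer capacity.
coding_share = 0.40   # assumed share of a developer's time spent writing code
speedup = 1.5         # assumed: coding tasks completed 50% faster with the Gen.AI tool

new_total_time = (1 - coding_share) + coding_share / speedup
capacity_gain = 1 / new_total_time - 1

print(f"Time per unit of work:   {new_total_time:.2f} of the original")
print(f"Effective capacity gain: {capacity_gain:.1%}")
```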

In addition to these, precisely because we're dealing with Gen.AI tools focused on specific domains, we can also assert that the impacts on individuals, and thus the consequent people-support plans, are highly dependent on the application context and not always entirely understandable in advance. This is especially true if such tools are used by personnel with limited digital literacy.

Therefore, to better understand the impacts on roles and responsibilities, on-site observation becomes necessary. Alongside technical training plans for these types of tools, it is indispensable to carefully evaluate how people naturally tend to approach them, as well as how the attitudes and skills of those using them change over time.

This kind of intervention is important because it cannot be assumed that everyone using these systems perceives the interaction in the same way: the evolution of personal skills and attitudes derives not only from personal inclinations but can also be influenced by the corporate culture (e.g., a greater emphasis on efficiency vs. a greater emphasis on increasing productivity).

From this, it can also be deduced that reusing experiences in a similar context but in a different company may not provide a perfect reference point, as it lacks the variables specific to that work environment.

“Generalist” Generative AI - Poorly Predictable Impacts

The last category discussed is that of Generative AI tools that aim to provide general support for daily productivity, such as ChatGPT, Google Gemini, or their integration into more common work tools (e.g., Copilot for Microsoft Office). These are perhaps the tools the general public has in mind today when discussing the impacts AI will bring to the world of work, to individuals, or to society at large.

I have already described in detail the initial effects of AI (and, in fact, specifically of “generalist” Gen.AI) on corporate personnel in a previous article (see link 8 below), under two declared conditions:

  • Restricting the redefinition to the manager's role, envisioning less of a "taskmaster" and more of a networker and entrepreneur,
  • Assuming that the entire workforce has access to, and a high level of proficiency with, the tools considered (which, presumably, may indeed be the case in a few years).

However, reasoning in broader terms about corporate roles over a shorter time horizon makes predictions more complicated, for a very simple reason: these tools are still in their infancy, and predicting their evolutionary direction, which currently appears frenetic, is not easy.

Furthermore, it is difficult even to find a precise definition for them and, consequently, to pin down their clear utility.

In an intriguing TED Talk (see link 9 below), Mustafa Suleyman, CEO of Microsoft AI, referred to them as "[...] digital companions, new partners in the journeys of all our lives, that enable everybody to prepare for and shape what comes next".

It is precisely through this statement, made by such an authoritative figure, that it becomes apparent why anticipating the impacts of these tools on a company's workforce is challenging: if the definition of these objects is vague, it is evident that precisely qualifying their immediate effects on people is not possible (hence the more cautious attitude expressed earlier).

In support of this consideration, one can also refer to a recent Gartner "Hype Cycle", which places Generative AI at the highest point of the "Peak of Inflated Expectations" (see link 10 below). This means that, unless further technological revolutions (real or presumed) occur in the meantime, expectations could be scaled down in the coming months, with all the ensuing consequences, including impacts on people.

However, while we cannot precisely predict the impact of these new systems on individuals, we shouldn't remain passive: companies are instead encouraged to experiment with the technology to gather real-world data and use the evidence to build effective people-related plans.

For example, by limiting pilot projects to appropriately selected personnel (see link 11 below), primarily aimed at testing the usefulness of the technology, it might be possible to introduce ways of evaluating how the skills and, potentially, even the roles of people interacting with this form of AI evolve.

This could be done simply through questionnaires administered during such experimentation, aimed at capturing the elements of interest, such as whether, through those tools, people:

  • Feel more autonomous, productive, and creative, and how this manifests concretely in actions, potentially leading to the creation of new roles,
  • Manage tasks that previously required more organizational levels (e.g., their own plus parts of those above and below), which could lead, for example, to organizational redefinition or simplification,
  • Have identified additional useful activities that could be performed and were previously not considered, from which innovations in processes or products could be hypothesized.

More specific questions can obviously be derived from these, or the example list can be extended, but the concept is simple: the more "generalist" and immature AI tools are, the less practical it is to predict new roles and responsibilities a priori.
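
By way of example, and assuming a simple 1-5 Likert-scale format for the answers (an assumption, not a prescription), the evidence gathered across successive survey waves could be tallied with something as basic as the sketch below, with all data invented.

```python
# Minimal sketch with invented data: tallying Likert-scale (1-5) pilot-survey
# answers per question and per survey wave, to see how perceptions evolve.
from statistics import mean

# Hypothetical responses: {wave: {question: [scores from each participant]}}
responses = {
    "month 1": {"feels more autonomous": [3, 2, 4, 3], "handles broader tasks": [2, 2, 3, 2]},
    "month 3": {"feels more autonomous": [4, 3, 4, 5], "handles broader tasks": [3, 3, 4, 3]},
}

for wave, answers_by_question in responses.items():
    print(wave)
    for question, scores in answers_by_question.items():
        print(f"  {question}: average {mean(scores):.1f} (n={len(scores)})")
```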

Pairing technological progress with a "field" assessment of how attitudes, roles, and responsibilities change therefore appears to be the most balanced approach: it allows organizational interventions to be prepared progressively and on the basis of evidence, rather than hastily and at the risk of losing alignment between the workforce's skills and the tools at its disposal.


Below are links to some additional articles that you may find interesting:

  1. Mark Zuckerberg tries to reassure employees that Facebook’s parent has an A.I. strategy after cutting 10,000 jobs
  2. Meet Lilli, our generative AI tool that’s a researcher, a time saver, and an inspiration
  3. Audi revolutionizes internal documentation with RAG-based AI chatbot
  4. The world’s most widely adopted AI developer tool
  5. Salesforce Einstein Copilot brings new reasoning and actions to enterprise generative AI
  6. Best of 2023: Measuring GitHub Copilot’s Impact on Engineering Productivity
  7. Mastering Semanticization and (Re)Ontologization Skills: The Key to Excelling with AI Interaction and Reversing Cultural Decline
  8. The role of the manager in the age of AI
  9. What Is an AI Anyway? | Mustafa Suleyman | TED
  10. What’s New in Artificial Intelligence from the 2023 Gartner Hype Cycle
  11. Beyond Generative AI Hype: Avoid the Bubble Through Selective People Engagement

Jörn Schöneich:

Grateful to Maurizio for the AI insights. For me in HR, it’s vital to identify skills that support our business in the AI landscape. Through pilot projects and data analysis, we’re pinpointing evolving skills and emerging roles shaped by AI. Such hands-on assessments could ensure our HR strategies advance in step with our team’s needs, harmonizing tech advancements with shifts in workplace dynamics.
