AI Frontrunners to Bridge the Gap Between Fast AI and Slow Regulation
Maurizio Marcon
Strategy Lead at Analytics and AI Products | Group Data & Intelligence
The rapid pace of technological progress is evident and has been for many years. A well-known "mathematical" formulation of this is Moore's Law (see link 1 below), which predicted in the 1970s that the number of transistors in integrated circuits would double every two years. This law has stood the test of time, still applying fairly well today, and, in practical terms, it means that the entire digital industry has enjoyed exponential evolution over the years. Consider, for instance, how quickly the cost of processors has fallen, how rapidly memory capacity in PCs and laptops has grown, and even the rise in the number of pixels in digital cameras. All these elements are closely tied to Moore's Law and have led not only to technological advancements but also to social changes, productivity improvements, and direct impacts on global economic growth.
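As a rough back-of-the-envelope rendering of that doubling claim (my own formulation, not one taken from the linked source), the law can be written as:

```latex
% Illustrative formulation of Moore's Law:
% N(t) = transistor count after t years, N_0 = the initial count.
N(t) \approx N_0 \cdot 2^{\,t/2}
```

In other words, over a decade the count grows by roughly a factor of 2^5 = 32, which is what "exponential evolution" means in practice.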
Thanks to all this, the applications of Artificial Intelligence have become increasingly impressive in recent years. Only recently, in fact, after years of evolution, has the necessary technology become available at an affordable price to process vast amounts of data in reasonable timeframes. Similarly, the recent hype about Generative AI (e.g., ChatGPT) is ultimately the result of years of technological and scientific progress. A few months ago, these advances produced results of extraordinary interest, but they certainly did not come out of nowhere.
Today, a long-standing issue that is becoming increasingly urgent is the regulation of technological progress. Governments and legislative bodies have operational times that are incompatible with the speed at which innovations are now made available on the market. This can lead to regulatory gaps that can be exploited, creating problems for society and the economy.
Recent examples revolve around the nature of contracts for so-called Gig Workers, for instance in food delivery chains or ride-hailing services (see link 2 below). Initially viewed as autonomous collaborators or freelancers, often underpaid and with little to no protections, they began to demand rights such as a minimum wage (see link 3 below), eventually being recognized as full-fledged employees of the very companies they worked for (see link 4 below).
Another glaring example (the proverbial elephant in the room) is the concentration of entire businesses in the hands of very few, yet immensely powerful players. Notable cases include Meta for Social Networking, Alphabet (Google) for search, and Amazon for e-commerce. These giants took advantage of regulatory loopholes to grow exponentially and secure a position of substantial monopoly (see link 5 below), undermining competitiveness and, ultimately, disadvantaging the end consumers and people at large.
Similarly, technology regulation, with its legal, risk, and compliance implications, faces the same issues within companies.
Entire departments dedicated to these aspects, in businesses of any nature, are under tremendous pressure nowadays. Companies are eager to leverage the possibilities offered by recent innovations, which are arriving at an incredible pace, but they wish to do so while adhering to regulations, to avoid penalties and reputational damage.
This issue is particularly relevant to the advancements in AI and Generative AI: CEOs are keen to accelerate their adoption and capitalize on the opportunities they present as soon as possible. However, the departments that, put simply, handle regulations and controls are overwhelmed with work and often cannot respond quickly enough to these changes and novelties.
Adding to this is a rather subtle aspect that has also evolved rapidly: simplifying and exaggerating a bit, Artificial Intelligence is already no longer a competitive factor. It's a survival factor. As we witness a democratization of AI, those employing it are "simply" adopting the same technologies used by other players: it is those not using it who will be swiftly penalized in and by the market.
Consequently, the real competitive factor today is managing to employ these technologies, as soon as possible and in a compliant manner, to reap the benefits they offer before others do.
And this clearly is a cultural and organizational challenge.
For instance, a typical and legitimate approach of control functions such as regulatory compliance is to want to regulate everything upfront. This is essentially a necessity, especially in heavily regulated businesses where hefty sanctions are a real risk; minimizing risk is therefore a corporate mandate.
However, this approach clashes with a world where almost daily a new technology emerges offering functionalities that may not fit perfectly within the current regulatory framework.
And if the posture is to hold off on adoption until an external regulator provides clarity, and only then reactively work out how to adjust company policies and processes, the company risks months of not using something potentially valuable that it already physically possesses.
On the other hand, adoption without paying attention to possible risks is clearly ill-advised. So, what could be done?
To resolve this dilemma, an intermediate approach can be hypothesized: one that still regulates upfront, but “just in time”, through the introduction of a new role: the “AI Frontrunner”.
This individual should be both an expert in the AI and Gen AI market and knowledgeable about the specific needs of their own company. They should maintain regular interactions with both the key vendors and the business units they serve, so as to initiate qualified discussions with control functions (e.g., Legal, Compliance, Risk) as early as possible, focused on upcoming innovations that could actually be relevant.
In this way, these functions would be alerted early and could begin, ex-ante, to set up the frameworks needed to manage a not-yet-released technology, so that when it is released it can be adopted immediately, because the requirements for risk analysis and management will already have been addressed.
Let's consider a simple example. Suppose a chatbot powered by Gen AI (e.g., ChatGPT, Google Bard, etc.) is already securely and compliantly deployed within a company. Assume the chatbot provider announces, as a future feature, a text-to-image capability, namely the ability to automatically generate images from a simple conversational text request. With traditional processes (i.e., ex-post regulation), at the time this feature becomes available on the market, the various control departments would begin discussing how to regulate the relevant issues, such as the intellectual property of the generated images. After weeks or months of discussion, they would finally establish guidelines to follow. This, evidently, would delay adoption, potentially letting competitors benefit from the feature sooner.
On the other hand, if an “AI Frontrunner,” staying informed about the market, had been aware of the provider's plan, discussions with the relevant functions could have taken place well in advance. By the release date of text-to-image, the technical teams and their business counterparts could employ the technology swiftly and frictionlessly, as the new processes and company policies would have been issued “just in time,” having been prepared beforehand, bringing forward by months the value generated by this new technology.
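To make the “just in time” idea a bit more concrete, here is a minimal, purely hypothetical sketch (the class, function, and dates are my own illustration, not anything from the article or a specific product) of how a company could gate a new capability behind a governance decision taken before the release date:

```python
# Purely illustrative sketch (hypothetical names, not an actual product or policy engine):
# a feature flag for a new Gen AI capability, gated on a governance decision
# prepared ex-ante, so the capability can be switched on the day the vendor ships it.
from dataclasses import dataclass
from datetime import date

@dataclass
class CapabilityPolicy:
    name: str                      # e.g. "text_to_image"
    approved_by_governance: bool   # decision taken in advance by the "AI Governing Body"
    effective_from: date           # the vendor's announced release date

def can_enable(policy: CapabilityPolicy, today: date) -> bool:
    """Usable from day one only if the compliance work (IP, risk, data protection, ...)
    was completed before the release date."""
    return policy.approved_by_governance and today >= policy.effective_from

# Example: text-to-image was reviewed months before its (hypothetical) release date.
text_to_image = CapabilityPolicy(
    name="text_to_image",
    approved_by_governance=True,
    effective_from=date(2024, 1, 15),
)
print(can_enable(text_to_image, today=date(2024, 1, 15)))  # True: no adoption delay
```

The point of the sketch is simply that, on release day, the only thing left to do is flip the flag, because the risk and compliance analysis was prepared in advance.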
The “AI Frontrunner” role could sit in various positions within the company, but it would presumably make sense to place it within a technical team that, among other responsibilities, develops AI applications: there it would have greater technological sensitivity, easier access to key vendors, and more direct visibility into business priorities through the development team's product roadmap.
Moreover, this role should maintain constant relations with the other company functions most involved in innovation and market research. These typically offer a longer-term view that, filtered through the lens of the short/medium-term roadmap mentioned above, can provide the insights needed to positively influence that roadmap and introduce innovative elements.
Lastly, by virtue of this position, this person would be ideally placed to initiate discussions with the control functions, ensuring timely reflection on how to regulate the use of a new technology before it becomes available. To be effective, these discussions could be triggered by reporting observations to a higher “AI Governing Body,” equipped with top-management sponsorship, which would have, among other things, decision-making power over the evolution of AI in the company and the authority to kick off the work needed to be ready by the availability date of the new capability.
This role could be full-time or not, depending on the number of AI products in the pipeline and the company's willingness to invest in AI. However, it would surely become an enabling figure in accelerating the adoption process of new technologies which, as discussed above, will increasingly be the true competitive factor.
I leave below links to some additional articles that you may find interesting: