The Layering Problem – Can AI Get Worse?

Hello and welcome to the latest edition of AI Strategy Brief. In this edition we'll review some recent industry developments in AI and consider The Layering Problem for businesses developing or using AI Applications.



AI Industry Updates:

  • Sector Wide – Scalable AI Is Here
  • Technology – New AI Competitors On The Horizon
  • Regulation – EU AI Act Moves Forward



Sector Wide – Scalable AI Is Here

Meta has launched Llama 2 in partnership with Microsoft through Azure, Bard is now available in Europe, Claude 2 has been made available by Anthropic (in the US and UK so far), and Code Interpreter is now in general beta release and available to all OpenAI premium subscribers. With Microsoft's Bing AI also available through the Edge browser, never before have so many AI-powered research and composition tools been available to practically everyone. Combined with advances in more industry-specific AI, as we have already seen in the financial and healthcare sectors, usable and scalable AI is now here and open to staff and companies in every industry. I'd estimate early adopters will have AI Strategies and Governance Systems up and running within 3-4 quarters and will be able to directly integrate AI capabilities into their business processes within that time. This means we are probably only a year away from seeing AI-Derived Competitive Advantages across all sectors and industries.



Technology – New AI Competitors On The Horizon

According to inside sources, Apple is now developing more advanced AI. Insiders have claimed that Apple has quietly begun working on a new framework for developing LLMs, called Ajax. While Apple is relatively late to the game in terms of AI development, their Siri program notwithstanding, the resources and innovation capabilities they bring will undoubtedly lead to some interesting advances. Meanwhile, the AI scene has been captivated by the entrance of a new player: xAI, Elon Musk's latest venture. If you're wondering what xAI is, it's shorthand for eXplainable Artificial Intelligence. The goal? To engineer AI that's not just powerful but also explainable, a key short-term objective in the field. xAI is already demonstrating its ambitious agenda, taking on projects aimed at developing AI that can make sense of, and elucidate, its own decision-making processes. Moreover, it seeks to craft AI capable of untangling complex problems in key sectors such as healthcare, finance, and transportation, all while ensuring this AI is safe and abides by ethical standards. Musk's reasoning behind establishing xAI? His longstanding advocacy for ethical and safe AI development. He has been an outspoken critic of the potential misuse of AI, and xAI is his way of tangibly addressing these concerns, while also entering the fastest-growing technology space of this century so far.



Regulation – EU AI Act Moves Forward

Significant progress was made in the EU Trilogue meeting this week, a pivotal gathering of the European Commission, the European Parliament, and the Council of the European Union, where AI was the star of the show. The meeting, which took place on Tuesday the 18th of July, delved into the nitty-gritty of AI, discussing at length its definition, the high-risk AI categories, and the controversial topic of remote biometric identification. The consensus is that the meeting was constructive, and considerable headway was made on several aspects. But let's not get ahead of ourselves; there are still challenging issues that need to be ironed out before the AI Act reaches its final form. The most contentious of these? Remote biometric identification. The European Parliament is staunchly advocating for its outright ban, while the Council of the European Union believes it should be permissible under specific, limited circumstances. Don't expect a resolution overnight; these discussions and negotiations are expected to roll on for some time. The target? To wrap up a final agreement on the AI Act by the close of 2023. This is one to watch though, as the impacts of the legislation on every sector could be very significant in scope.



The Big Question: The Layering Problem – Can AI Get Worse?

We take for granted that technological advancement is a one-way vector: that the technology we have today is invariably inferior to the technology we will have in the future. However, researchers from Stanford and Berkeley have recently discovered that AI programs like ChatGPT-4 have gotten worse at performing some tasks over the last few months, as new updates have been rolled out. What does this mean for AI, and what does it mean for any business trying to develop or integrate AI Applications into their business processes?


The AI We Have Today

Professor Ethan Mollick of the Wharton School is one of the leading researchers of AI Applications today. Luckily for us, he is also one of the most prolific sharers of his initial explorations and research. Professor Mollick is fond of saying that the AI we have today is the worst AI we will ever have, and I agree with him. Given the rate of advancement we are seeing in AI, its growing capabilities should not be underestimated by anyone. While AI advancement seems to be going only one way, this is true only in general terms; the situation is different when we start to consider unique downstream use cases. Prof. Mollick has also drawn attention to the manner in which rolling updates to AI models like GPT-4 have made their downstream applications better and worse at certain tasks. This is highly problematic for AI Application developers and, as we are beginning to realise, for the businesses that use them.


Models & Applications – The Layering Problem’s Origins

To understand this, we first need to appreciate the relationship that exists between underlying AI Models and the AI Applications built on top of them. To explain, I'll take the models and applications of OpenAI as an example. GPT-4 is a Large Language Model. ChatGPT-4 is an AI Application with a chatbot interface, built on top of the LLM lying underneath. Other applications, such as the marketing tool Jasper AI, are second-degree applications sitting on top of this architecture as well. The relationships between AI Model Builders and AI Application Developers are at the core of the AI Innovation Ecosystem we see emerging across industries today, but they are also the cause of the Layering Problem organisations are now confronted with. When a change is made in an AI Model, that change instantly affects all AI Applications and further layers built upon the underlying model. Yet changes to underlying models are rolled out constantly by companies like OpenAI, without any communication or explanation, and these changes can have unknown and unforeseen effects at the AI Application level. This is the essence of what I call the Layering Problem of AI.
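The layering relationship above can be sketched in a few lines of code. This is a deliberately simplified illustration, not a real API: the model functions and the MarketingApp class are hypothetical stand-ins for an underlying AI Model and a downstream AI Application built on top of it.

```python
def model_v1(prompt: str) -> str:
    """Stand-in for an underlying AI Model (version 1)."""
    return f"[v1] answer to: {prompt}"


def model_v2(prompt: str) -> str:
    """A silently rolled-out update: same interface, different behaviour."""
    return f"[v2] a different answer to: {prompt}"


class MarketingApp:
    """A downstream AI Application built on whichever model layer it is handed."""

    def __init__(self, model):
        # The app depends on, but does not control, this underlying layer.
        self.model = model

    def draft_copy(self, brief: str) -> str:
        return self.model(f"Write marketing copy for: {brief}")


# The application code never changes, yet its behaviour does
# the moment the model layer underneath is updated.
app = MarketingApp(model_v1)
before = app.draft_copy("a new running shoe")

app.model = model_v2  # the silent model update
after = app.draft_copy("a new running shoe")

print(before == after)  # prints False: same call, different behaviour
```

In the real ecosystem the situation is worse than this sketch suggests: the swap from `model_v1` to `model_v2` happens server-side, on the Model Builder's schedule, so the Application Developer may not even know when it occurred.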


Developing Blind – The Layering Problem & AI Applications

Understanding that a change in an underlying AI Model can create knock-on effects for any AI Application built upon it, it is important to differentiate between the two critical aspects of this problem. Firstly, there is the challenge that changes are being rolled out by companies like OpenAI without any communication or explanation. This is likely down to AI Model Builders wanting to protect their competitive advantages. Secondly, there is the problem of the unforeseen consequences of changes made in underlying LLMs. AI Model Builders do not consider downstream use cases when updating their models, instead focusing on how to optimise the models themselves under their own development parameters. This is why AI is best understood as a General-Purpose Technology. Downstream AI Application Developers are taking these General-Purpose Technologies and building AI Applications on top of them, with a view to serving specific use cases like marketing, content creation, and education. Given this disconnect between AI Model Builders and AI Application Developers, it is not easy to anticipate how changes to underlying AI Models will affect downstream AI Applications.


Uncertain Capabilities – The Layering Problem For Organisations

Going a step further downstream in the AI Value Chain, we then have to consider the businesses that wish to integrate AI Applications into their business processes. Studies have already demonstrated the potential efficiency, productivity, and creativity gains that AI Applications offer. The potential ROIs here cannot reasonably be ignored, but this is not a one-sided calculation. Organisations seeking to implement and integrate AI Applications into their frontline and support activities must also be aware of the Layering Problem and the risks it poses. AI Application Users are even further downstream from the changes occurring in underlying AI Models, and have even less insight into how those changes will affect their own capabilities. For businesses this represents a significant and real strategic risk that is present today. At the same time, leaders cannot ignore the potential ROI AI Applications offer, lest they fall behind competitors who embrace and integrate AI to their own competitive benefit.


Complexity, Governance & Dynamic Explainability

So, how can we solve the AI Layering Problem? There is no easy answer. Constant updates to underlying AI Models will continue to create unforeseen changes at the AI Application level. For organisations looking to use AI Applications in support of their business processes this is a serious problem, as changes in underlying models will create unforeseen, and perhaps undetected, alterations in their processes and performance over time. Can companies ignore AI then? Unfortunately not; this problem cannot be avoided by ignoring AI. Any company that avoids integrating AI Applications into its processes will quickly lose ground to competitors who do. The productivity benefits derived from AI Application use alone would create a competitive gap too difficult for even a similarly sized competitor to close. The only way organisations can counter The Layering Problem is through the inclusion of an adaptable monitoring measure in their AI Strategy and Governance Systems, one that can offer Dynamic Explainability. To date there has been much conversation about AI Explainability, and it is at the heart of what companies like xAI and many advisory businesses (mine included) are developing to aid clients. The Layering Problem only exacerbates this need, and goes further in the demands placed on AI Explainability than we have understood up to now. The Layering Problem means Static Explainability will not suffice. Instead, Governance and Strategy Systems must be developed to handle the needs of Dynamic Explainability.



Leadership Takeaways:

  • Generative AI capabilities are now widely available to all staff and all industries in a variety of forms.
  • New AI competitors are entering the space, which will further spur and accelerate the innovation trajectory of AI.
  • The EU AI Act is moving forward, though areas of dispute remain between Trilogue partners.
  • Research has shown that changes in AI Models are altering their downstream capabilities, sometimes for the worse.
  • The Layering Problem relates to the nature of AI Applications developed on top of underlying AI Models, and the unforeseen consequences of rolling model updates.
  • AI Explainability is crucial to any AI Application Development or AI Process Integration.
  • The Layering Problem necessitates a form of Dynamic Explainability that should feature in AI Strategy and Governance Systems.
  • A far greater burden will be placed on AI Strategy and Governance Systems than we first thought.



That’s it for this week’s edition of AI Strategy Brief. Thanks for reading and I hope you enjoyed this update and learned something new. If you found something of value, please subscribe and share this article with others. And if you have any questions, comments, or suggestions, I’d love to hear them in the comments section below.


See you next time!

