The Future of AI: Trust and Innovation Post-Davos
AI - the next big thing!?
Davos was all about AI as the "next big thing". This has seemed clear to everyone since the release of ChatGPT. Common comparisons for the impact of AI include: AI's "iPhone moment", the biggest revolution since the invention of the Mosaic browser, or the biggest impact on human interaction since the internet fundamentally changed the way we communicate. Multimodal large language models, such as ChatGPT and its successors, are seen in this light. However, there is still debate: are we already in the impact phase or still in the hype phase? Regardless, AI and, above all, generative AI will fundamentally change an entire generation, entire societies even, and we must seize this opportunity to shape the transformation the way we see fit!
Generative AI - let's make them dance
What was described a few years ago with the more technical term machine learning has been shortened to two letters: AI. One thing seems to be clear: everyone wants it. Everyone loves it. Even those who don't yet want it or love it will have to want it and learn to love it. It therefore seems important to bring generative AI applications to market as quickly as possible, convince business customers of the need for customisation and impress them with convenience. Many companies demonstrate how far they have already come by showing how business processes are reorganised from the ground up in a digital and automated fashion. But what about trustworthiness?
Trustworthy AI - the next big thing?
To be widely accepted, AI must be trustworthy. This includes two key aspects: first, a technical component, which guarantees that the program executes as intended, is robust against adversarial attacks and prevents harm to users through false outputs. Second, a non-technical component, which ensures that applications are documented in a comprehensible manner and assesses whether fundamental rights are violated. Terms such as trustworthy, ethical and responsible are central when discussing these components. Notably, many experts agree that these dimensions of trustworthiness ultimately lead to higher-quality models. Although terms such as transparency, explainability and human-machine interaction are frequently discussed on panels, there is still a lack of concrete best practices being followed and proofs of concept being shared. However, since the introduction of the AI Act, prioritising and realising high-quality high-risk models will become essential. Developing methods, tools and processes for effective and systematic certification at scale is crucial. We are glad to call the TüV companies our shareholders: when it comes to trustworthiness, quality and safety, they have one of the strongest track records globally.
TüV AI.Lab - Certifying the future
The TüV AI.Lab was launched in order to advance trustworthy AI. Our core objective is to translate the requirements of the AI Act and other essential laws in this space into concrete testing practice. The goal is clear: leverage the TüV companies' longstanding success in the field of certification and transfer their extensive expertise into the digital era, positioning TüV as a leader in AI certification. As major steps towards this objective, we are currently participating in several projects in which we develop test methods for a range of use cases, so that AI applications can be certified reliably. By collaborating with a dynamic ecosystem of research institutions, companies, non-governmental organisations and policymakers, all pursuing the same goal of trustworthy AI, we aim to make a significant contribution to shaping the AI era in a European way.
Missed the panel? No problem!
You can find the panel discussion on AI Standardization and Governance with our CEO Franziska Weindauer as well as Sebastian Hallensleben, Andrew Ng, Gary Marcus and Laure Willemin by clicking here (YouTube).