Welcome to the December 2023 edition of the AI & Partners Newsletter. This is our tailor-made monthly newsletter, updating you on the latest developments and analyses of the proposed EU Artificial Intelligence Act (the “EU AI Act”). As expected, this month has been a flurry of activity, straddling all manner of legislative, commercial and operational matters, with a final push ahead of January 2024 – the expected enactment date of the EU AI Act. With the final round of trilogue talks happening early this month, already-high expectations are rising further. This issue examines everything from the trilogue discussions to potential legislative action. Our objective is to stay ahead of the curve and keep you informed.
AI’s long-term potential for sustained value creation remains uncontested. Now, and in the foreseeable future, we base our services around helping firms achieve regulatory excellence for the EU AI Act. We hope you find this content beneficial.
As always, if you have any comments or recommendations for content in future editions or would like to contribute content, please let us know: [email protected].
What’s the latest with the legislative process?
- According to Euractiv's Luca Bertuzzi, the EU is in the final stages of negotiating the AI Act. The recent trilogue on 24 October among the Council, Parliament, and Commission reached consensus on classifying high-risk AI applications and overseeing powerful foundation models, albeit with pending details on prohibitions and law enforcement. Despite a negative legal review from the Parliament's office, the proposal on classifying high-risk AI remained largely the same. The tiered approach to foundation models seems to have broad support, but exactly how to define the top tier of 'very capable' foundation models remains a challenge. A political agreement is expected at the next trilogue on 6 December, but it is not guaranteed. Disagreements persist, notably on which AI applications should be prohibited and what exceptions should be left to law enforcement agencies.
- Under the Hiroshima AI process, the G7 leaders have reached an agreement on International Guiding Principles and a voluntary Code of Conduct for AI developers. The EU supports these principles alongside the ongoing creation of legally binding rules within the AI Act. These international standards aim to complement the EU regulations, uphold similar values, and ensure trustworthy AI development. The eleven principles aim to provide direction for the responsible development, deployment, and use of advanced AI systems such as foundation models and generative AI. They include commitments on risk and misuse mitigation, responsible information sharing, incident reporting, cybersecurity investment, and a labelling system for AI-generated content. The principles were developed jointly by the EU and other G7 members and have subsequently formed the basis for detailed practical guidance for AI developers.
- According to Euractiv's Luca Bertuzzi, the whole AI Act may be in jeopardy. On Friday, 10 November, negotiations broke down as larger member countries sought to retract the proposed approach for foundation models. The dispute revolves around how to regulate AI models like OpenAI's GPT-4, which powers the popular ChatGPT. A consensus emerged in the previous trilogue to implement tiered rules for these models, emphasising stricter regulations for the most impactful ones, currently developed by non-European companies. However, opposition from major European countries, notably France, Germany, and Italy, has since intensified. French AI startup Mistral, whose EU relations are handled by former digital state secretary Cédric O, and Germany's Aleph Alpha, closely connected to the German establishment, both fear and oppose the regulation. Unless it is resolved soon, this deadlock poses a risk to the entire AI legislation.
- As a reminder for everyone who has not followed the AI Act development process from the start, here are the key stages with regard to the regulation of general-purpose AI systems (GPAIS) and foundation models. In April 2021, the European Commission's original draft of the Act did not mention GPAIS or foundation models. In August 2021, the Future of Life Institute (where I work) and a handful of other stakeholders provided feedback to the Commission that the draft did not address increasingly general AI systems such as GPT-3 (the state of the art back then). In November 2021, the Council, led by Slovenia, introduced an Article 52a dedicated to GPAIS, stating that GPAIS shall not by themselves only be subject to the regulation. In March 2022, the JURI committee in the European Parliament essentially copied these same provisions into their position. In May 2022, the Council, then led by France, substantially modified the provisions for GPAIS by requiring such systems, which may be used as high-risk AI systems or as components of high-risk systems, to comply with select requirements. In November 2022, a Czechia-led Council adopted their position, which stated that GPAIS which may be used as high-risk AI systems or as components of high-risk AI systems shall comply with all of the requirements established in the chapter on requirements for high-risk AI systems. In June 2023, the Parliament adopted their position and introduced Article 28b, with obligations for the providers of foundation models regardless of how they are distributed.
- In more Euractiv coverage from 9 November, the AI Act was making progress with proposed criteria for identifying powerful foundation models. The Spanish presidency circulated a draft on 7 November offering obligations for foundation models, including the most powerful or 'high-impact' ones. Leading MEPs suggested initial criteria for determining the most impactful models, including data sample size, model parameter size, computing resources, and performance benchmarks. They advised that the Commission should develop a methodology to assess these thresholds and adjust them as technological developments warrant (a rough illustrative sketch of such a threshold-based designation appears after this list). Suggested obligations for high-impact models included registration in the EU public database and assessing systemic risks. MEPs advocated for the AI Office to publish yearly reports on recurring risks, best practices for risk mitigation, and a breakdown of systemic risks.
- On 7 November, Euractiv stated that foundation model governance in the EU’s AI law is starting to take shape. The Spanish presidency put forth a governance architecture for overseeing obligations on foundation models and high-impact foundation models. The Commission, via implementing acts, would define procedures for monitoring foundation model providers, outlining the AI Office's role, appointing a scientific panel, and conducting audits. Audits may be performed by the Commission, independent auditors, or vetted red-teamers with API access to the model. The proposed governance framework includes the AI Office and a scientific panel for regular consultations with the scientific community, civil society, and developers. The panel's tasks encompass contributing to evaluation methodologies, advising on high-impact models, and monitoring safety risks. In cases of non-compliant AI systems posing significant EU-level risks, the Commission can conduct emergency evaluations and impose corrective measures.
- According to Euractiv's Luca Bertuzzi, MEPs negotiating the AI Act stand by tighter regulations for powerful AI models, like OpenAI's GPT-4. There was previously consensus on a tiered approach with broad obligations for all foundation models and additional requirements for those posing systemic risks. France, Germany, and Italy have since broken with that consensus, opposing obligations on foundation models across the board. The European Parliament insists on obligations for developers of the most powerful models, introducing a working paper with binding requirements. These include internal evaluation and testing, cybersecurity measures, technical documentation, and energy-efficiency standards. The obligations would apply solely to the original developers of models with systemic risk, such as OpenAI and Anthropic, not to the downstream developers that refine the models. The AI Office would oversee compliance and impose sanctions for breaches. The parliamentarians accept the idea of EU codes of practice, but only to complement the horizontal transparency requirements for all foundation models. Criteria for designating models with systemic risk include capabilities, number of users, financial investment, modalities, and release strategies, rejecting the single quantitative threshold proposed by the Commission.
- The European Commission has introduced the AI Pact, encouraging companies to voluntarily commit to implementing measures outlined in the AI Act before the legal deadline. Some AI Act provisions will take effect shortly after adoption, while others, particularly for high-risk AI systems, will apply after a transitional period. The AI Pact seeks early industry commitment to anticipate and implement AI Act requirements, addressing concerns about the rapid adoption of generative and general-purpose AI systems. Companies can pledge to work toward compliance, outlining the processes and practices they are planning or already putting in place. The Commission will collect and publish these pledges, fostering transparency and credibility. The AI Pact aims to create a community of key EU and non-EU industry players to exchange best practices with the aim of increasing awareness of the future AI Act principles. Interested organisations can now express their interest in participating, with the formal launch expected after the Act's adoption.
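For readers who prefer to see the mechanics, below is a minimal sketch of how a threshold-based 'high-impact' designation of the kind MEPs proposed might work. The criteria names mirror those reported above (data sample size, parameter count, training compute, benchmark performance); the threshold values, the FoundationModel structure, and the two-of-four rule are purely illustrative assumptions, not figures from the Act or the trilogue texts.

```python
from dataclasses import dataclass

@dataclass
class FoundationModel:
    name: str
    training_tokens: float    # data sample size, in training tokens
    parameters: float         # model parameter count
    training_flops: float     # compute used for training
    benchmark_score: float    # aggregate capability benchmark, 0-100

# Illustrative thresholds only: the negotiating texts did not fix numbers,
# and the Commission was expected to define the methodology and adjust
# thresholds as technology develops.
THRESHOLDS = {
    "training_tokens": 1e12,
    "parameters": 1e11,
    "training_flops": 1e25,
    "benchmark_score": 75.0,
}

def is_high_impact(model: FoundationModel, min_criteria_met: int = 2) -> bool:
    """Designate a model 'high-impact' if it exceeds enough of the criteria."""
    met = sum(getattr(model, name) >= limit for name, limit in THRESHOLDS.items())
    return met >= min_criteria_met

example = FoundationModel("hypothetical-model", 2e12, 1.5e11, 3e25, 82.0)
print(is_high_impact(example))  # True under these illustrative thresholds
```

The min_criteria_met parameter stands in for the open policy question noted above: whether designation should hinge on a single quantitative threshold or on a combination of indicators.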
What do the latest analyses state?
- Cristina Gallardo, a Senior Reporter at Sifted, wrote that Spain's AI and Digitalisation Minister, Carme Artigas, has urged calm among AI startup founders in response to the AI Act. Founders worry that the legislation might hinder Europe's competitiveness compared to global rivals like the US and China. Artigas emphasises that the Act's primary goal is not to stifle but to foster innovation, offering a two-year adaptation period and national sandbox initiatives to support companies. She highlights the extensive three-year contemplation behind the rules, assuring that no dimension has been overlooked. Negotiations for the final text continue, primarily focusing on categorising systems, law enforcement use of AI applications, regulating foundational models, and ensuring the Act remains relevant amidst AI's rapid advancements. Artigas underscores the Act's adaptability, intending to introduce mechanisms for timely updates and ensure it does not become obsolete. Despite challenges, she remains confident in reaching an agreement on the legislation by the end of the year.
- Riesgos Catastróficos Globales (RCG) published a position paper on the AI Act trilogue presenting six recommendations for policymakers. It focuses on regulating frontier models, and the recommendations include proposals to define frontier models, their evaluation, risk management systems, deployment safeguards, establishing an AI Office, and ensuring compliance of open-source models. More concretely, the paper suggests 1) third-party model evaluations and testing, 2) risk management throughout the lifecycle of the frontier model, 3) various safeguards, such as monitoring any instances of serious malfunction, incidents or misuse, and prevention and contingency plans, 4) an independent AI Office to oversee evaluations and assess large-scale risks, and 5) that providers of frontier models must comply with the regulation, irrespective of whether they are provided under free and open-source licenses.
- The European Consumer Organisation (BEUC) expressed concern about the potential adoption of an ambiguous and inadequate approach for regulating generative AI systems like ChatGPT or Bard within the EU. BEUC emphasises the necessity of a robust legal framework to protect consumers from the risks posed by generative AI, such as manipulation, dissemination of false information, privacy violations, increased fraud and disinformation, and reinforcement of biases. The proposed approach for determining which generative AI systems fall under specific obligations is criticised as unclear and complex. This ambiguity creates uncertainty for regulators, consumers, and companies falling within the law's scope. BEUC highlights the risk that perhaps only AI systems developed by large companies will be adequately regulated, leaving a substantial number of systems subjected only to weak transparency requirements, and inadequately protecting consumers in numerous scenarios.
- Creative Commons, Communia Association and Wikimedia Europe published a statement advocating for a balanced and tailored approach to regulating foundation models and for transparency within the AI Act more generally. They commend the Spanish presidency's consideration of a more tailored approach to foundation models. The statement stresses the importance of maintaining flexibilities for using copyrighted materials as AI training data, maintaining a delicate balance between users' rights and the necessities of scientific research and innovation. Moreover, the statement calls for a proportionate approach to transparency obligations, recommending fewer burdens on smaller players such as non-commercial actors and SMEs. Finally, it expresses concerns about the lack of clarity on the copyright transparency obligation – including on the scope and content of the obligation to provide training data summaries – and urges clearer guidelines for effective implementation through an accountable entity like the proposed AI Office.
- Kris Shrishak, a Senior Fellow at the Irish Council for Civil Liberties, published an op-ed in Euractiv emphasising the need for greater regulatory empowerment within the AI Act. The current proposal focuses primarily on self-assessment by companies, lacking third-party assessments for most high-risk AI systems, notably those used in education, employment, and law enforcement contexts. Shrishak indicates that the current draft could burden regulators with inadequate powers and tools to enforce the regulation effectively. The absence of 'remote investigation' powers, the limitations for accessing AI system source codes, the insufficient computational resources, and the necessity for more skilled personnel within regulatory bodies are among Shrishak's primary concerns. Shrishak advocates for enhancements to empower regulators with remote investigation capabilities, simplified access to AI source codes during investigations, broader access to AI models beyond mere API, and a larger skilled workforce to enforce the legislation.
- Two doctoral students from Germany, Anton Leicht and Dominik Hermle, argued in a blog post that the recent criticism of the EU AI Act's foundation model regulation on economic grounds is misleading, as a strong regulatory focus on foundation models would be highly economically beneficial to the EU. Leicht and Hermle state that the foundation model regulation is criticised – notably by France and Germany – for potentially impeding European foundation model development. Concerns centre on economic competitiveness against global AI leaders like OpenAI, Google, and Meta. While EU providers Aleph Alpha and Mistral AI have lately secured investments in the hundreds of millions, their models trail counterparts like GPT-3.5 and GPT-4 in both performance and applications. Aleph Alpha's and Mistral's best models perform at the level of GPT-3 and the weakest version of Meta's Llama 2. Considering their lack of computational resources, funding, data and talent, these providers are judged to be several years of development behind the global leaders, with minimal chance of catching up. However, foregoing comprehensive foundation model regulation risks burdening the potentially vast market of downstream deployers, leading to economic peril and increased compliance costs for them.
- Natasha Lomas at Tech Crunch wrote that the AI Act negotiations are at a critical stage. Talks, described as "complicated" and "difficult" by MEP Brando Benifei, are particularly contentious regarding the regulation of generative AI and foundation models. Heavy industry lobbying, especially by French startup Mistral AI and German firm Aleph Alpha, has resulted in French and German opposition to MEPs' proposals for foundation model regulation. Lobbycontrol, an EU and German lobby transparency nonprofit, accuses Big Tech of lobbying for a laissez-faire approach, undermining AI Act safeguards. Mistral CEO Arthur Mensch denies blocking regulations but emphasises that regulations should focus on applications, not infrastructure. The outcome remains uncertain, with the risk of an impasse if member states resist accountability for upstream AI model makers.
- Bram Vranken, Researcher and Campaigner at the Corporate Europe Observatory, published an op-ed on Social Europe about how Big Tech companies are using intense lobbying efforts to derail the AI Act, pushing for advanced AI systems, known as 'foundation models', to remain unregulated. Vranken argues that tech corporations like Google and Microsoft investing billions in partnerships with startups contributes to a near-monopoly. The Parliament aimed to impose obligations on companies developing foundation models, such as mitigating risks to fundamental rights, checking the quality of the data used to train these AI systems for bias, and lowering their environmental impact. However, behind closed doors, tech firms have resisted such regulations – despite publicly calling for AI regulation. Lobbying by Big Tech has increased; this year, 66% of AI-related meetings involving Parliament members were with corporate interests. CEOs of Google, OpenAI, and Microsoft have engaged with high-level EU policymakers, and 86% of high-level Commission officials' meetings on AI have been with industry. AI startup Mistral AI has joined the lobbying campaign, with the former French Secretary of State for Digital Transition, Cédric O, in charge of EU relations. French, German, and Italian officials also met tech industry representatives to discuss cooperation on AI, after which they started echoing Big Tech's push for innovation-friendly regulations.
- Rishi Bommasani, the Society Lead at the Stanford Center for Research on Foundation Models, wrote an overview of possible approaches to categorising different foundation models. Several governments, including the US and the EU, are contemplating tiered regulations for foundation models, taking into account the impact and potential harm they may cause. Bommasani emphasises that tiers should be determined by demonstrated impact, with scrutiny increasing for models that have a greater societal impact or pose more significant risks. However, measuring impact is challenging, as foundation models are not directly used by the public. Bommasani suggests two potential routes forward: tracking the applications that depend on a given foundation model and counting the aggregate number of users across those downstream applications (a rough illustrative sketch of the latter appears after this list). He also raises the possibility of hybrid approaches – which the Parliament has recently considered – of integrating different tiering strategies for more robust regulation.
- The Computer & Communications Industry Association published an explainer of foundation models. The text defines 'AI foundation models' as models trained on broad data with self-supervision capabilities, enabling adaptation to various downstream tasks. These models, a subset of general-purpose AI, power numerous applications like text generation, accessibility, innovation, education, data analysis, research, and automation. Prominent examples include OpenAI's GPT-3.5 and GPT-4, Google's PaLM 2, Meta's Llama 2, and Amazon's Titan. The rapid deployment of such tools has prompted debates on AI policy, with a consensus on addressing risks like bias, safety, cybersecurity, and privacy. The text suggests that rules for foundation models should be technology-neutral, focus on high-risk uses, maintain exemptions for developers, have balanced and implementable rules, avoid unnecessary copyright requirements, streamline responsibilities along the value chain, and establish a fair implementation timeline under the AI Act.
- Equinet and ENNHRI jointly issued a statement urging policymakers to enhance protection for equality and fundamental rights within the AI Act. Their recommendations include ensuring a robust enforcement and governance framework for foundation models and high-impact foundation models, incorporating mandatory independent risk assessments, fundamental rights expertise, and stronger oversight. They also emphasise legal protection for high-risk systems, effective collaboration between the AI Office, national supervisory authorities and independent public enforcement mechanisms, and a redress mechanism for AI-enabled discrimination victims. The statement advocates for mandatory fundamental rights impact assessments for AI system deployers, a ban on biometric and surveillance practices that pose unacceptable risks to equality and human rights, and the prohibition of predictive policing for criminal and administrative offences due to its potential to embed structural biases and over-police certain groups of people.
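To make Bommasani's impact-based tiering more concrete (see the Stanford CRFM item above), the sketch below aggregates users across the downstream applications that depend on each foundation model and assigns a tier from that total. The registry, the user counts, and the tier boundaries are assumptions made for illustration only; no regulator has proposed these particular values.

```python
from collections import defaultdict

# Hypothetical registry: downstream application -> (foundation model it builds on, monthly users)
applications = {
    "chat-assistant": ("model-a", 40_000_000),
    "code-helper": ("model-a", 5_000_000),
    "search-summariser": ("model-b", 800_000),
}

# Illustrative tier boundaries on aggregate monthly users across dependent apps.
TIER_BOUNDARIES = [
    (45_000_000, "systemic-risk"),
    (1_000_000, "standard"),
    (0, "minimal"),
]

def tier_models(apps: dict) -> dict:
    """Aggregate downstream users per foundation model and assign each a tier."""
    totals = defaultdict(int)
    for model, users in apps.values():
        totals[model] += users
    tiers = {}
    for model, total in totals.items():
        for boundary, label in TIER_BOUNDARIES:
            if total >= boundary:
                tiers[model] = (total, label)
                break
    return tiers

print(tier_models(applications))
# {'model-a': (45000000, 'systemic-risk'), 'model-b': (800000, 'minimal')}
```

A hybrid approach of the kind the Parliament has considered would combine a count like this with other indicators (capabilities, investment, modalities, release strategy) rather than relying on any single measure.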
If you have any interaction with AI systems, such as use, marketing, development, importing or distribution, including those categorised as high-risk, you may be within scope. Contact us to receive more information on this matter.
What is coming up next?
Final Trilogue Discussions – 6 December 2023
Negotiators have considered several other contentious elements of the AI Act that remain open, including the development and use of foundation models and general-purpose AI. The issues that were not resolved during the previous trilogue session were pushed to the next session, scheduled for 6 December. This ambitious schedule may delay the adoption of the AI Act to 2024, although the timetable remains unclear.
The Spanish presidency of the EU Council has repeatedly maintained that it plans to reach full agreement on the AI Act by the end of 2023, which makes the December trilogue meeting a high-stakes affair. Nine “technical meetings” of the trilogue negotiators are scheduled to find common ground on the AI Act’s most complex and consequential aspects. The negotiators are considering open issues to form a “package deal” for 6 December that would address compromises on the proposed bans of high-risk AI systems, law enforcement exceptions, the fundamental rights impact assessment, and sustainability provisions.
Failure to reach full agreement on these issues could push negotiations to early 2024, increasing the risks of additional delays due to the June 2024 election for European Parliament representatives.
What AI & Partners can do for you
Providing a suite of professional services laser-focused on the EU AI Act
- Providing advisory services: We provide advisory services to help our clients understand the EU AI Act and how it will impact their business. We do this by identifying areas of the business that may need to be restructured, identifying new opportunities or risks that arise from the regulation, and developing strategies to comply with the EU AI Act.
- Implementing compliance programs: We help our clients implement compliance programs to meet the requirements of the EU AI Act. We do this by developing policies and procedures, training employees, and creating monitoring and reporting systems.
- Conducting assessments: We conduct assessments of our clients' current compliance with the EU AI Act to identify gaps and areas for improvement. We do this by reviewing documentation, interviewing employees, and analysing data.
- Providing technology solutions: We also provide technology solutions to help our clients comply with the EU AI Act. We do this by developing software or implementing new systems to help our clients manage data, track compliance, or automate processes.
We are also ready to engage in an open and in-depth discussion with stakeholders, including the regulator, about various aspects of our analyses.
Our Best Content Picks for 2023
Milestone: Deloitte RegTech Universe – Listing
RegTech (Regulatory Technology) is more than a buzzword; it is a very real movement that is already having an impact on regulatory compliance. Discover our RegTech Universe, where we are compiling a list of RegTech companies along with the technologies and solutions they are offering.
We are honoured to announce our admission to the list in October 2023.
Milestone: IAPP Privacy Vendor List - Listing
From legal advisers and insurance companies to information technology services and software, businesses must work with a large collection of vendors from a variety of disciplines to reach their privacy goals. The ever-growing IAPP Privacy Vendor List offers information on organizations that can help you protect data, meet regulatory requirements, respond to breaches, set policies and more. This list aims to serve as a complimentary resource for IAPP users.
We are honoured to announce our admission to the list in October 2023.
Partnership: OCEANIS
Introducing @OCEANIS, the Open Community for Ethics in Autonomous and Intelligent Systems
A Global Forum for discussion, debate and collaboration for organizations interested in the development and use of standards to further the development of autonomous and intelligent systems.
Working together to enhance the understanding of the role of standards in facilitating innovation while addressing problems that expand beyond technical solutions to addressing ethics and values.
@AI & Partners is delighted to announce that it is a member of @OCEANIS. @AI & Partners will help provide a high-level global forum for discussion, debate and collaboration for organizations interested in the development and use of standards to further the development of autonomous and intelligent systems.
#AI #Autonomous #Ethics #Community #Open #Intelligent #System
Report: AI & Partners: Global AI Benchmarking Study
The world of technology is transforming before our eyes. Artificial intelligence (“AI”) is creating new paradigms for economic activity and forging alternative conduits of value creation. AI & Partners, since its founding in 2021, has been at the forefront of documenting, analysing and indeed critically challenging that technological transformation.
This Global AI Benchmarking Study is our inaugural research focused on AI. Led by Sean Musch and Michael Borrelli, it is the first study of its kind to holistically examine the burgeoning global AI industry and its key constituents, which include AI Services, AI Products, AI Infrastructure, AI Adopters, and AI Ancillary.
The findings are both striking and thought-provoking. First, user adoption of AI has really taken off, with billions in investment and thousands of companies estimated by 2023. Second, the AI industry is both globalised and localised, with borderless operations as well as geographically clustered infrastructure activities. Third, the industry is becoming more fluid, as the lines between services and products are increasingly ‘blurred’ and a multitude of AI types, not just generative AI, are now supported by a growing ecosystem, fulfilling an array of functions. Fourth, issues of privacy and regulatory compliance are likely to remain prevalent for years to come.
We hope this study will provide value to academics, practitioners, policymakers and regulators alike.
Report: Tech UK - WRC and the UK: Navigating AI’s Wireless Frontier
The World Radiocommunication Conference (“WRC”) stands as a crucial event on the international technology landscape, with implications that stretch beyond its immediate radiocommunication focus. The United Kingdom (“UK”), in particular, has a vested interest in the outcomes of WRC, given its significant impact on various sectors and, notably, the pressing issue of artificial intelligence (“AI”) and the implications of the European Union (“EU”) AI Act (the “EU AI Act”).
Report: AI & Partners – What is Know Your AI (KYAI)?
Know Your AI (“KYAI”) is the process firms use to verify their artificial intelligence (“AI”) systems and their risk levels, and to inform compliance risk assessments. KYAI is a foundation of European Union (“EU”) AI Act (the “EU AI Act”) compliance for firms in jurisdictions worldwide. Given its regulatory importance, firms should understand how to implement KYAI effectively.
With adoption of AI systems on the rise, KYAI policies have evolved to bolster risk management and mitigate the risk of harm to individuals’ safety, health, fundamental rights, and democracy. Effective KYAI protects providers and other firms from costly compliance penalties, criminal liability, and reputational damage, and safeguards individuals who may otherwise fall victim to unethical and/or untrustworthy AI.
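To make the idea concrete, here is a minimal sketch of the kind of AI-system inventory record a KYAI process might maintain, using the EU AI Act's risk categories as labels. The field names, the RiskLevel values as written, and the needs_escalation helper are our own illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # e.g. employment, education, law-enforcement uses
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical KYAI inventory."""
    name: str
    purpose: str
    provider: str
    risk_level: RiskLevel
    training_data_documented: bool
    human_oversight: bool
    open_findings: list = field(default_factory=list)

def needs_escalation(record: AISystemRecord) -> bool:
    """Flag records a compliance team would likely review first."""
    return record.risk_level in (RiskLevel.UNACCEPTABLE, RiskLevel.HIGH) or bool(record.open_findings)

cv_screener = AISystemRecord(
    name="cv-screening-tool",
    purpose="ranking job applicants",
    provider="hypothetical-vendor",
    risk_level=RiskLevel.HIGH,           # employment uses sit in the high-risk category
    training_data_documented=False,
    human_oversight=True,
    open_findings=["no bias audit on file"],
)
print(needs_escalation(cv_screener))  # True
```

In practice, a KYAI inventory would also capture the evidence trail (documentation, test results, incident logs) feeding the monitoring and reporting systems described under our services above.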
Event: SumSub – The Roadmap to AI Regulation
At the #SumsubMultiverse event on 23 November, we participated in a debate on “Balancing Innovation and Compliance Risks with AI”.
The debate featured the following experts:
- Greg Wlodarczyk - Head of Specialist Financial Crime Advisory & Virtual Assets, FINTRAIL
- Michael Charles Borrelli - Co-CEO/COO at AI & Partners
- Denis Nwanshi, MBA - CEO at NetraScale
Here’s what was on the agenda:
When drafting #AI regulations, what's more important: innovation, safety, or compliance?
Can AI be regulated without stifling innovation?
Is fintech self-regulation a viable alternative to government oversight? Are any jurisdictions pursuing it?
You'll find more information here: https://lnkd.in/eqeparYB
Event: Insurance Innovators Summit
What an exhilarating journey we've had together! As we wrap up this transformative event, we wanted to share some of the key highlights and takeaways.
Exploring Cutting-Edge Insights. The summit was a hub of groundbreaking ideas and insights. Renowned industry experts illuminated the latest trends, from AI-driven underwriting to blockchain-powered claims processing. These innovations promise to reshape the insurance landscape.
Navigating the AI Revolution. AI took centre stage, and rightfully so. Discussions on harnessing the power of artificial intelligence for predictive analytics and enhanced customer experiences captured our imagination. But, with great power comes great responsibility. We delved deep into AI ethics and compliance, ensuring that innovation goes hand in hand with ethical practices.
Fostering Collaboration. Networking sessions allowed for meaningful connections. You've expanded your professional network and forged partnerships that will propel your organizations forward.
Impacting Society. Our commitment to societal impact was evident. Sustainability in insurance emerged as a critical theme. Sessions on climate risk, ESG investments, and inclusive insurance reaffirmed our industry's role in creating a better future.
Gratitude. To our esteemed speakers, sponsors (e.g. Neural Magic), and all of you, thank you for making the event a resounding success. Your active participation, thoughtful questions, and spirited discussions enriched the event.
Stay Connected. Keep the momentum going! Join our online community to continue the conversations, access exclusive content, and stay updated on industry developments.
Legitimate Interest
We will continue to communicate with UK and EU individuals on the basis of ‘legitimate interest’. If you are happy to continue receiving marketing communications covering similar topics to those you have received previously, no action is needed; otherwise, please unsubscribe using the Opt Out link below. We will process your personal information in accordance with our privacy notice.
Opt Out
If, however, you do not wish to receive any marketing communications from us in the future, you can unsubscribe from our mailing list at any time. Thank you.
DISCLAIMER
All information in this message and attachments is confidential and may be legally privileged. Only intended recipients are authorized to use it. E-mail transmissions are not guaranteed to be secure or error free and sender does not accept liability for such errors or omissions. The company will not accept any liability in respect of such communication that violates our e-Mail Policy.